Snowflake SnowPro Advanced Data Engineer Practice Test

SnowPro Advanced Data Engineer

Last exam update: Oct 11, 2024
Page 1 out of 6. Viewing questions 1-10 out of 65

Question 1

A Data Engineer is implementing a near real-time ingestion pipeline to load data into Snowflake using the Snowflake Kafka connector. Three Kafka topics will be created.
Which Snowflake objects are created automatically when the Kafka connector starts? (Choose three.)

  • A. Tables
  • B. Tasks
  • C. Pipes
  • D. Internal stages
  • E. External stages
  • F. Materialized views
Answer: A, C, D

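For reference, you can confirm what the connector created once it starts consuming from the topics. A minimal sketch, assuming the connector writes into a database KAFKA_DB and schema KAFKA_SCHEMA (hypothetical names); the exact object names depend on the connector and topic configuration:

    -- The Kafka connector automatically creates one table per topic,
    -- one internal stage per topic, and one pipe per topic partition.
    USE SCHEMA KAFKA_DB.KAFKA_SCHEMA;
    SHOW TABLES;
    SHOW STAGES;   -- internal stages created by the connector
    SHOW PIPES;    -- pipes created by the connector, one per topic partition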

Question 2

What is the purpose of the BUILD_STAGE_FILE_URL function in Snowflake?

  • A. It generates an encrypted URL for accessing a file in a stage.
  • B. It generates a staged URL for accessing a file in a stage.
  • C. It generates a permanent URL for accessing files in a stage.
  • D. It generates a temporary URL for accessing a file in a stage.
Answer: C

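A minimal sketch of the function in use, assuming a stage named MY_STAGE and a staged file reports/summary.pdf (both hypothetical):

    -- Returns a permanent Snowflake file URL for the staged file; the URL does not
    -- expire, and access is governed by privileges on the stage.
    SELECT BUILD_STAGE_FILE_URL(@my_stage, 'reports/summary.pdf');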

Question 3

A Data Engineer needs to load JSON output from some software into Snowflake using Snowpipe.
Which recommendations apply to this scenario? (Choose three.)

  • A. Load large files (1 GB or larger).
  • B. Ensure that data files are 100-250 MB (or larger) in size, compressed.
  • C. Load a single huge array containing multiple records into a single table row.
  • D. Verify each value of each unique element stores a single native data type (string or number).
  • E. Extract semi-structured data elements containing null values into relational columns before loading.
  • F. Create data files that are less than 100 MB and stage them in cloud storage at a frequency greater than once each minute.
Answer: B, D, E

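A minimal Snowpipe definition reflecting these recommendations, with hypothetical object names; STRIP_OUTER_ARRAY splits a top-level JSON array into one row per element rather than loading the whole array into a single row:

    CREATE OR REPLACE PIPE raw.json_pipe
      AUTO_INGEST = TRUE
    AS
      COPY INTO raw.events
      FROM @raw.events_stage
      FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);
    -- Stage compressed files of roughly 100-250 MB (or larger) for efficient ingestion.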

Question 4

A company has an extensive script in Scala that transforms data by leveraging DataFrames. A Data Engineer needs to move these transformations to Snowpark.
What characteristics of data transformations in Snowpark should be considered to meet this requirement? (Choose two.)

  • A. It is possible to join multiple tables using DataFrames.
  • B. Snowpark operations are executed lazily on the server.
  • C. User-Defined Functions (UDFs) are not pushed down to Snowflake.
  • D. Snowpark requires a separate cluster outside of Snowflake for computations.
  • E. Columns in different DataFrames with the same name should be referred to with square brackets.
Answer: A, B

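Because Snowpark DataFrame operations are lazy and pushed down to Snowflake, a DataFrame join is only compiled and executed as SQL on the server when an action (such as collect or write) is called. Roughly, a join of two DataFrames resolves to a single server-side query like the following (hypothetical table and column names):

    -- Equivalent SQL generated and run inside Snowflake for a Snowpark DataFrame join;
    -- no separate cluster outside Snowflake is involved.
    SELECT o.order_id, o.amount, c.customer_name
    FROM orders o
    JOIN customers c
      ON o.customer_id = c.customer_id;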

Question 5

Which output is provided by both the SYSTEM$CLUSTERING_DEPTH function and the SYSTEM$CLUSTERING_INFORMATION function?

  • A. average_depth
  • B. notes
  • C. average_overlaps
  • D. total_partition_count
Answer: A

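Both calls below return average_depth; SYSTEM$CLUSTERING_DEPTH returns it as a single number, while SYSTEM$CLUSTERING_INFORMATION returns it as one field of a JSON document (the table name is hypothetical, and the column argument can be omitted when a clustering key is defined):

    SELECT SYSTEM$CLUSTERING_DEPTH('sales');        -- scalar: the average clustering depth
    SELECT SYSTEM$CLUSTERING_INFORMATION('sales');  -- JSON including average_depth, average_overlaps,
                                                    -- total_partition_count, notes, etc.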

Question 6

Given the table SALES, which has a clustering key on the CLOSED_DATE column, which table function will return the average clustering depth for the SALES_REPRESENTATIVE column for the North American region?

  • A. select system$clustering_information('Sales', 'sales_representative', 'region = ''North America''');
  • B. select system$clustering_depth('Sales', 'sales_representative', 'region = ''North America''');
  • C. select system$clustering_depth('Sales', 'sales_representative') where region = 'North America';
  • D. select system$clustering_information('Sales', 'sales_representative') where region = 'North America';
Answer: B


Question 7

Which Snowflake objects does the Snowflake Kafka connector use? (Choose three.)

  • A. Pipe
  • B. Serverless task
  • C. Internal user stage
  • D. Internal table stage
  • E. Internal named stage
  • F. Storage integration
Answer: A, D, E


Question 8

A table is loaded using Snowpipe and truncated afterwards. Later, a Data Engineer finds that the table needs to be reloaded, but the metadata of the pipe will not allow the same files to be loaded again.
How can this issue be solved using the LEAST amount of operational overhead?

  • A. Wait until the metadata expires and then reload the file using Snowpipe.
  • B. Modify the file by adding a blank row to the bottom and re-stage the file.
  • C. Set the FORCE=TRUE option in the Snowpipe COPY INTO command.
  • D. Recreate the pipe by using the CREATE OR REPLACE PIPE command.
Answer: C

Discussion — darkonimbus (2 months, 1 week ago): FORCE=TRUE isn't supported for Snowpipe, as specified in the Usage Notes: https://docs.snowflake.com/en/sql-reference/sql/create-pipe#usage-notes

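For context, the two approaches debated above look roughly like this (hypothetical object names). Per the usage notes linked in the comment, FORCE = TRUE is not supported in a pipe's COPY statement, so option C would have to run as a regular warehouse-backed COPY rather than through Snowpipe:

    -- Option C: a one-off COPY with FORCE = TRUE, ignoring load metadata (runs on a warehouse).
    COPY INTO my_table
    FROM @my_stage
    FILE_FORMAT = (TYPE = 'JSON')
    FORCE = TRUE;

    -- Option D: recreating the pipe, which resets the pipe's load history.
    CREATE OR REPLACE PIPE my_pipe AUTO_INGEST = TRUE AS
      COPY INTO my_table FROM @my_stage FILE_FORMAT = (TYPE = 'JSON');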

Question 9

The following is returned from SYSTEM$CLUSTERING_INFORMATION() for a table named ORDERS with a DATE column named O_ORDERDATE (the output is not reproduced here; it reports a total_constant_partition_count of 493):

What does the total_constant_partition_count value indicate about this table?

  • A. The table is clustered very well on O_ORDERDATE, as there are 493 micro-partitions that could not be significantly improved by reclustering.
  • B. The table is not clustered well on O_ORDERDATE, as there are 493 micro-partitions where the range of values in that column overlap with every other micro-partition in the table.
  • C. The data in O_ORDERDATE does not change very often, as there are 493 micro-partitions containing rows where that column has not been modified since the row was created.
  • D. The data in O_ORDERDATE has a very low cardinality, as there are 493 micro-partitions where there is only a single distinct value in that column for all rows in the micro-partition.
Answer: A

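The call behind this question looks roughly like the following (only the field discussed in the question is annotated):

    SELECT SYSTEM$CLUSTERING_INFORMATION('orders', '(o_orderdate)');
    -- The JSON result includes total_constant_partition_count: the number of micro-partitions
    -- whose clustering-key values are constant, so reclustering cannot improve them further.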

Question 10

What is a characteristic of the use of external tokenization?

  • A. Secure data sharing can be used with external tokenization.
  • B. External tokenization cannot be used with database replication.
  • C. Pre-loading of unmasked data is supported with external tokenization.
  • D. External tokenization allows the preservation of analytical values after de-identification.
Answer: D

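With external tokenization, data is tokenized before it is loaded, and a masking policy calls an external function to detokenize it only for authorized roles; the tokens remain usable for joins and aggregations, which is what preserves analytical value after de-identification. A minimal sketch, assuming a hypothetical external function DETOKENIZE_SSN, role ANALYST_ROLE, and table CUSTOMERS:

    CREATE OR REPLACE MASKING POLICY ssn_detokenize AS (val STRING) RETURNS STRING ->
      CASE
        WHEN CURRENT_ROLE() IN ('ANALYST_ROLE') THEN detokenize_ssn(val)  -- external function (hypothetical)
        ELSE val  -- other roles see only the token that was loaded
      END;

    ALTER TABLE customers MODIFY COLUMN ssn SET MASKING POLICY ssn_detokenize;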