Amazon AWS Certified Developer - Associate (DVA-C02) Practice Test

Last exam update: Sep 01, 2024
Page 1 out of 41. Viewing questions 1-10 out of 414

Question 1

An application under development is required to store hundreds of video files. The data must be encrypted within the application prior to storage, with a unique key for each video file.

How should the developer code the application?

  • A. Use the KMS Encrypt API to encrypt the data. Store the encrypted data key and data.
  • B. Use a cryptography library to generate an encryption key for the application. Use the encryption key to encrypt the data. Store the encrypted data.
  • C. Use the KMS GenerateDataKey API to get a data key. Encrypt the data with the data key. Store the encrypted data key and data.
  • D. Upload the data to an S3 bucket using server-side encryption with an AWS KMS key.
Answer:

C
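Option C is the envelope-encryption pattern: GenerateDataKey returns a plaintext data key and an encrypted copy of it; the application encrypts the file locally with the plaintext key, discards that key, and stores only the ciphertext and the encrypted key. A minimal sketch of the pattern in Python, where `fake_kms_generate_data_key` and the XOR keystream cipher are toy stand-ins (real code would call `kms.generate_data_key` via boto3 and use an AEAD cipher, and the master key would never leave KMS):

```python
import hashlib
import os

MASTER_KEY = os.urandom(32)  # stand-in for the KMS-managed key; in AWS KMS this never leaves the service


def fake_kms_generate_data_key():
    """Stand-in for kms.generate_data_key(KeyId=..., KeySpec="AES_256")."""
    plaintext_key = os.urandom(32)
    # KMS would encrypt the data key under the master key; XOR is a toy stand-in.
    encrypted_key = bytes(a ^ b for a, b in zip(plaintext_key, MASTER_KEY))
    return plaintext_key, encrypted_key


def keystream_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream (illustration only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))


def encrypt_video(video_bytes: bytes) -> dict:
    plaintext_key, encrypted_key = fake_kms_generate_data_key()  # unique key per file
    ciphertext = keystream_cipher(plaintext_key, video_bytes)
    # Store ONLY the encrypted key and the ciphertext; the plaintext key is dropped.
    return {"encrypted_key": encrypted_key, "ciphertext": ciphertext}


def decrypt_video(record: dict) -> bytes:
    # Real code would call kms.decrypt on the encrypted key instead.
    plaintext_key = bytes(a ^ b for a, b in zip(record["encrypted_key"], MASTER_KEY))
    return keystream_cipher(plaintext_key, record["ciphertext"])
```

The same cipher function both encrypts and decrypts because XOR with the keystream is its own inverse; only the envelope structure (store encrypted key + ciphertext) mirrors the real KMS workflow.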


Question 2

An IT department uses Amazon S3 to store sensitive images. After more than 1 year, the company moves the images into archival storage. The company rarely accesses the images, but the company wants a storage solution that maximizes resiliency. The IT department needs access to the images that have been moved to archival storage within 24 hours.

Which solution will meet these requirements MOST cost-effectively?

  • A. Use S3 Standard-Infrequent Access (S3 Standard-IA) to store the images. Use S3 Glacier Deep Archive with standard retrieval to store and retrieve archived images.
  • B. Use S3 Standard-Infrequent Access (S3 Standard-IA) to store the images. Use S3 Glacier Deep Archive with bulk retrieval to store and retrieve archived images.
  • C. Use S3 Intelligent-Tiering to store the images. Use S3 Glacier Deep Archive with standard retrieval to store and retrieve archived images.
  • D. Use S3 One Zone-Infrequent Access (S3 One Zone-IA) to store the images. Use S3 Glacier Deep Archive with bulk retrieval to store and retrieve archived images.
Answer:

A

(Option D is incorrect: S3 One Zone-IA keeps data in a single Availability Zone, which does not maximize resiliency, and S3 Glacier Deep Archive bulk retrieval can take up to 48 hours, missing the 24-hour requirement. Standard retrieval completes within 12 hours, and S3 Standard-IA is cheaper than S3 Intelligent-Tiering for objects that are rarely accessed.)


Question 3

A developer is configuring an application's deployment environment in AWS CodePipeline. The application code is stored in a GitHub repository. The developer wants to ensure that the repository package's unit tests run in the new deployment environment. The developer has already set the pipeline's source provider to GitHub and has specified the repository and branch to use in the deployment.

Which combination of steps should the developer take next to meet these requirements with the LEAST overhead? (Choose two.)

  • A. Create an AWS CodeCommit project. Add the repository package's build and test commands to the project's buildspec.
  • B. Create an AWS CodeBuild project. Add the repository package's build and test commands to the project's buildspec.
  • C. Create an AWS CodeDeploy project. Add the repository package's build and test commands to the project's buildspec.
  • D. Add an action to the source stage. Specify the newly created project as the action provider. Specify the build artifact as the action's input artifact.
  • E. Add a new stage to the pipeline after the source stage. Add an action to the new stage. Specify the newly created project as the action provider. Specify the source artifact as the action's input artifact.
Answer:

B, E

(Option D is incorrect: the test action belongs in a new stage after the source stage, and its input must be the source artifact; a build artifact does not exist yet at that point.)
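Options B and E run the unit tests in a CodeBuild project whose buildspec lists the package's build and test commands. A minimal illustrative `buildspec.yml` (the `npm` commands are assumptions about the repository package; substitute its actual build and test commands):

```yaml
version: 0.2
phases:
  install:
    commands:
      - npm ci          # install dependencies (assumes a Node.js package)
  build:
    commands:
      - npm run build   # repository package's build command (assumption)
      - npm test        # run the unit tests
```

In the pipeline, this CodeBuild project is the action provider of a new stage placed after the source stage, with the source artifact as the action's input artifact.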


Question 4

An application that runs on AWS receives messages from an Amazon Simple Queue Service (Amazon SQS) queue and processes the messages in batches. The application sends the data to another SQS queue to be consumed by another legacy application. The legacy system can take up to 5 minutes to process some transaction data.

A developer wants to ensure that there are no out-of-order updates in the legacy system. The developer cannot alter the behavior of the legacy system.

Which solution will meet these requirements?

  • A. Use an SQS FIFO queue. Configure the visibility timeout value.
  • B. Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure the DelaySeconds values.
  • C. Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure the visibility timeout value.
  • D. Use an SQS FIFO queue. Configure the DelaySeconds value.
Answer:

A
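Option A combines two FIFO properties: messages that share a message group ID are delivered strictly in order, and while one message from a group is in flight (hidden by the visibility timeout, which should exceed the legacy system's 5-minute processing time), later messages in that group are not delivered. A toy in-memory model of those two semantics (illustration only, not the AWS API):

```python
class FifoQueueModel:
    """Toy in-memory model of SQS FIFO semantics (illustration, not the AWS API)."""

    def __init__(self, visibility_timeout: float):
        # Visibility timeout should exceed the consumer's maximum processing
        # time (more than 5 minutes for the legacy system in the question).
        self.visibility_timeout = visibility_timeout
        self.messages = []  # dicts: group, body, invisible_until, consumed

    def send_message(self, group_id: str, body: str) -> None:
        self.messages.append(
            {"group": group_id, "body": body, "invisible_until": 0.0, "consumed": False}
        )

    def receive_message(self, now: float):
        # Groups with an in-flight (invisible, unconsumed) message are blocked,
        # so updates within a group can never be processed out of order.
        in_flight_groups = {
            m["group"]
            for m in self.messages
            if not m["consumed"] and m["invisible_until"] > now
        }
        for handle, m in enumerate(self.messages):
            if m["consumed"] or m["invisible_until"] > now:
                continue
            if m["group"] in in_flight_groups:
                continue  # strict ordering: group blocked while a message is in flight
            m["invisible_until"] = now + self.visibility_timeout
            return handle, m["body"]
        return None

    def delete_message(self, handle: int) -> None:
        self.messages[handle]["consumed"] = True
```

The model shows why DelaySeconds (options B and D) is irrelevant here: delaying delivery does not stop a later message from overtaking one that is still being processed; only the group-level in-flight blocking does.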


Question 5

A developer is setting up a deployment pipeline. The pipeline includes an AWS CodeBuild build stage that requires access to a database to run integration tests. The developer is using a buildspec.yml file to configure the database connection. Company policy requires automatic rotation of all database credentials.

Which solution will handle the database credentials MOST securely?

  • A. Retrieve the credentials from variables that are hardcoded in the buildspec.yml file. Configure an AWS Lambda function to rotate the credentials.
  • B. Retrieve the credentials from an environment variable that is linked to a SecureString parameter in AWS Systems Manager Parameter Store. Configure Parameter Store for automatic rotation.
  • C. Retrieve the credentials from an environment variable that is linked to an AWS Secrets Manager secret. Configure Secrets Manager for automatic rotation.
  • D. Retrieve the credentials from an environment variable that contains the connection string in plaintext. Configure an Amazon EventBridge event to rotate the credentials.
Answer:

C

(Option A hardcodes credentials in the buildspec.yml file, which is insecure. AWS Secrets Manager supports automatic credential rotation natively; Parameter Store does not.)
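Option C maps directly onto CodeBuild's buildspec support: the `env`/`secrets-manager` mapping pulls a Secrets Manager secret value into the build environment at run time, so no credential ever appears in the repository. An illustrative fragment (the secret name `prod/db/credentials` and the test script are assumptions):

```yaml
version: 0.2
env:
  secrets-manager:
    DB_PASSWORD: prod/db/credentials:password   # format: secret-id:json-key
phases:
  build:
    commands:
      - ./run-integration-tests.sh   # reads DB_PASSWORD from the environment (assumed script)
```

Because the buildspec stores only the secret's identifier, Secrets Manager can rotate the underlying credential on schedule without any pipeline change.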


Question 6

A developer created an AWS Lambda function that performs a series of operations that involve multiple AWS services. The function's duration time is higher than normal. To determine the cause of the issue, the developer must investigate traffic between the services without changing the function code.

Which solution will meet these requirements?

  • A. Enable AWS X-Ray active tracing in the Lambda function. Review the logs in X-Ray.
  • B. Configure AWS CloudTrail. View the trail logs that are associated with the Lambda function.
  • C. Review the AWS Config logs in Amazon CloudWatch.
  • D. Review the Amazon CloudWatch logs that are associated with the Lambda function.
Answer:

A

(Only AWS X-Ray traces the calls a function makes to downstream AWS services, and active tracing can be enabled on the function's configuration without changing its code. CloudWatch logs show the function's own output, not inter-service traffic.)


Question 7

A developer is investigating an issue in part of a company's application. In the application, messages are sent to an Amazon Simple Queue Service (Amazon SQS) queue. An AWS Lambda function polls messages from the SQS queue and sends email messages by using Amazon Simple Email Service (Amazon SES). Users have been receiving duplicate email messages during periods of high traffic.

Which reasons could explain the duplicate email messages? (Choose two.)

  • A. Standard SQS queues support at-least-once message delivery.
  • B. Standard SQS queues support exactly-once processing, so the duplicate email messages are because of user error.
  • C. Amazon SES has the DomainKeys Identified Mail (DKIM) authentication incorrectly configured.
  • D. The SQS queue's visibility timeout is lower than or the same as the Lambda function's timeout.
  • E. The Amazon SES bounce rate metric is too high.
Answer:

A, D
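Because standard queues are at-least-once (A), and a visibility timeout at or below the function timeout lets a message reappear while it is still being processed (D), a defensive consumer should deduplicate by message ID. A minimal sketch, where `send_email` is a stand-in for the SES call and the in-memory set would be a durable store (e.g. DynamoDB) in real code:

```python
def make_idempotent_handler(send_email):
    """Wrap an email-sending callable so redelivered SQS messages are sent only once.

    `send_email` stands in for an SES send call; real code would persist the
    seen IDs in a durable store such as DynamoDB, not in process memory.
    """
    seen_message_ids = set()

    def handle(message):
        msg_id = message["messageId"]
        if msg_id in seen_message_ids:
            return "skipped-duplicate"  # redelivery of an already-processed message
        send_email(message["body"])
        seen_message_ids.add(msg_id)
        return "sent"

    return handle
```

Raising the visibility timeout above the Lambda function's timeout removes the most common redelivery trigger, but at-least-once delivery still makes an idempotency check like this worthwhile.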


Question 8

A company has on-premises data centers that run an image processing service. The service consists of containerized applications that run on Kubernetes clusters. All the applications have access to the same NFS share for files and data storage.

The company is running out of NFS capacity in the data centers and needs to migrate to AWS as soon as possible. The Kubernetes clusters must be highly available on AWS.

Which combination of actions will meet these requirements? (Choose two.)

  • A. Transfer the information that is in the NFS share to an Amazon Elastic Block Store (Amazon EBS) volume. Upload the container images to Amazon Elastic Container Registry (Amazon ECR).
  • B. Transfer the information that is in the NFS share to an Amazon Elastic File System (Amazon EFS) volume. Upload the container images to Amazon Elastic Container Registry (Amazon ECR).
  • C. Create an Amazon Elastic Container Service (Amazon ECS) cluster to run the applications. Configure each node of the cluster to mount the Amazon Elastic Block Store (Amazon EBS) volume at the required path for the container images.
  • D. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to run the applications. Configure each node of the cluster to mount the Amazon Elastic Block Store (Amazon EBS) volume at the required path for the container images.
  • E. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to run the applications. Configure each node of the cluster to mount the Amazon Elastic File System (Amazon EFS) volume at the required path for the container images.
Answer:

B, E

(Amazon EFS is the managed NFS-compatible file system and can be mounted by nodes across Availability Zones, matching the shared NFS requirement; an Amazon EBS volume attaches to a single instance in a single Availability Zone.)


Question 9

A company has an application that runs as a series of AWS Lambda functions. Each Lambda function receives data from an Amazon Simple Notification Service (Amazon SNS) topic and writes the data to an Amazon Aurora DB instance.

To comply with an information security policy, the company must ensure that the Lambda functions all use a single securely encrypted database connection string to access Aurora.

Which solution will meet these requirements?

  • A. Use IAM database authentication for Aurora to enable secure database connections for all the Lambda functions.
  • B. Store the credentials and read the credentials from an encrypted Amazon RDS DB instance.
  • C. Store the credentials in AWS Systems Manager Parameter Store as a secure string parameter.
  • D. Use Lambda environment variables with a shared AWS Key Management Service (AWS KMS) key for encryption.
Answer:

C

(A single SecureString parameter gives every function the same encrypted connection string. Lambda environment variables are configured per function, so option D does not provide a single shared value.)
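Option C's mechanics: every function reads the same SecureString parameter, so there is exactly one encrypted connection string, and caching the decrypted value outside the handler avoids a Parameter Store call on every warm invocation. A sketch with an injected fetcher standing in for `ssm.get_parameter(Name=..., WithDecryption=True)`; the parameter name is an assumption for illustration:

```python
def make_connection_string_getter(fetch_parameter, name="/app/db/connection-string"):
    """Cache a SecureString parameter across warm Lambda invocations.

    `fetch_parameter` stands in for:
        boto3.client("ssm").get_parameter(Name=name, WithDecryption=True)
    The parameter name is hypothetical.
    """
    cache = {}

    def get_connection_string():
        if "value" not in cache:
            cache["value"] = fetch_parameter(name)  # decrypted once, reused while warm
        return cache["value"]

    return get_connection_string
```

In a Lambda function, the getter would be created at module scope so the cached value survives across invocations of the same execution environment.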


Question 10

A company has an existing application that has hardcoded database credentials. A developer needs to modify the existing application. The application is deployed in two AWS Regions with an active-passive failover configuration to meet the company's disaster recovery strategy.

The developer needs a solution to store the credentials outside the code. The solution must comply with the company's disaster recovery strategy.

Which solution will meet these requirements in the MOST secure way?

  • A. Store the credentials in AWS Secrets Manager in the primary Region. Enable secret replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.
  • B. Store credentials in AWS Systems Manager Parameter Store in the primary Region. Enable parameter replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.
  • C. Store credentials in a config file. Upload the config file to an S3 bucket in the primary Region. Enable Cross-Region Replication (CRR) to an S3 bucket in the secondary Region. Update the application to access the config file from the S3 bucket, based on the Region.
  • D. Store credentials in a config file. Upload the config file to an Amazon Elastic File System (Amazon EFS) file system. Update the application to use the Amazon EFS file system Regional endpoints to access the config file in the primary and secondary Regions.
Answer:

A
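Option A works because a replicated secret keeps the same name in the secondary Region, so the application only needs to substitute the current Region into the secret's identifier. A sketch (the account ID and secret name are hypothetical; note that full Secrets Manager ARNs end in a random 6-character suffix, so the suffix-less partial ARN built here, which Secrets Manager also accepts, is the practical form to construct):

```python
import os


def secret_arn_for_region(region=None, account_id="111122223333",
                          secret_name="prod/app/db-credentials"):
    """Build the Region-local identifier for a replicated secret.

    The account ID and secret name are hypothetical. Replicated secrets keep
    the same name in every Region, so only the Region segment changes.
    """
    region = region or os.environ.get("AWS_REGION", "us-east-1")
    return f"arn:aws:secretsmanager:{region}:{account_id}:secret:{secret_name}"
```

During failover, the application in the secondary Region resolves the same secret name against its local Region, so no code change or redeployment is needed.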
