# AWS-Certified-Data-Analytics---Specialty — Question 428

**Type:** multiple_choice
**Topics:** topic_1

## Question

An operations team notices that a few AWS Glue jobs for a given ETL application are failing. The AWS Glue jobs read a large number of small JSON files from an
Amazon S3 bucket and write the data to a different S3 bucket in Apache Parquet format with no major transformations. Upon initial investigation, a data engineer notices the following error message in the History tab on the AWS Glue console: `Command Failed with Exit Code 1.`
Upon further investigation, the data engineer notices that the driver memory profile of the failed jobs quickly crosses the safe threshold of 50% usage and soon reaches 90-95%. The average memory usage across all executors remains below 4%.
The data engineer also notices the following error while examining the related Amazon CloudWatch Logs.
_(Screenshot of the CloudWatch Logs error message not reproduced.)_

What should the data engineer do to solve the failure in the MOST cost-effective way?

## Correct Answer

_See scenario._

## Explanation

Based on the linked article, I will go for B.

https://awsfeed.com/whats-new/big-data/optimize-memory-management-in-aws-glue
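The symptom described (driver memory exhausted while executor memory stays near idle) is the classic sign of the Glue driver listing and tracking a huge number of small S3 files itself. The linked article's remedy is to enable S3 file grouping so small files are coalesced into larger read tasks. A minimal sketch, assuming a PySpark Glue job; the bucket paths are hypothetical:

```python
# Connection options enabling AWS Glue's S3 file grouping: many small
# input files are coalesced into larger groups per read task, so the
# driver no longer has to track every file individually.
grouping_options = {
    "groupFiles": "inPartition",  # group files within each S3 partition
    "groupSize": "134217728",     # target ~128 MB per group (bytes, as a string)
}

# Inside the Glue job script, these options would be passed to the reader
# and the result written back out as Parquet, e.g.:
#
#   dyf = glue_context.create_dynamic_frame.from_options(
#       connection_type="s3",
#       connection_options={
#           "paths": ["s3://source-bucket/input/"],  # hypothetical path
#           **grouping_options,
#       },
#       format="json",
#   )
#   glue_context.write_dynamic_frame.from_options(
#       frame=dyf,
#       connection_type="s3",
#       connection_options={"path": "s3://target-bucket/output/"},  # hypothetical path
#       format="parquet",
#   )
```

Because grouping only changes how input files are batched, it addresses the driver-memory failure without resizing the cluster, which is what makes it the cost-effective option here.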

**Reference:** examtopics_top_comment

---
Source: https://hiexam.net/q/amazon/AWS-Certified-Data-Analytics---Specialty/428  