amazon · AWS-Certified-Machine-Learning-Engineer---Associate-MLA-C01 · Q428 · multiple_choice · topic_1

A company plans to deploy an ML model for production inference on an Amazon SageMaker endpoint. The average inference payload size will vary from 100 MB to 300 MB. Inference requests must be processed in 60 minutes or less. Which SageMaker inference option will meet these requirements?
  • A. Serverless inference
  • B. Asynchronous inference
  • C. Real-time inference
  • D. Batch transform
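
For context on the trade-off being tested: SageMaker real-time and serverless endpoints cap request payloads in the single-digit-MB range, while asynchronous inference accepts payloads up to 1 GB (staged in S3) and allows processing times of up to an hour, which matches the 100–300 MB payloads and 60-minute window in the question. A minimal sketch of invoking an asynchronous endpoint with boto3 is below; the endpoint name and S3 URI are hypothetical placeholders.

```python
def submit_async_inference(endpoint_name: str, input_s3_uri: str) -> str:
    """Queue a request against a SageMaker asynchronous inference endpoint.

    The payload is not sent inline; it is read from S3 (InputLocation),
    and SageMaker writes the result back to S3 when processing completes.
    Returns the S3 URI where the output will appear.
    """
    import boto3  # imported lazily so the sketch loads without AWS deps

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint_async(
        EndpointName=endpoint_name,            # hypothetical endpoint name
        InputLocation=input_s3_uri,            # e.g. "s3://my-bucket/payloads/input.json"
        ContentType="application/json",
    )
    return response["OutputLocation"]
```

Because the call returns immediately with an output location rather than the prediction itself, the client polls S3 (or subscribes to the endpoint's SNS notifications) for the result, which is what lets long-running, large-payload jobs fit this model.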