microsoft · AI-100 · Q426 · multiple_choice · topic_1


You are designing an AI solution in Azure that will perform image classification. You need to identify which processing platform will provide you with the ability to update the logic over time. The solution must have the lowest latency for inferencing without having to batch. Which compute target should you identify?
  • A. graphics processing units (GPUs)
  • B. field-programmable gate arrays (FPGAs)
  • C. central processing units (CPUs)
  • D. application-specific integrated circuits (ASICs)
Explanation
FPGAs, such as those available on Azure, provide performance close to that of ASICs, yet they remain flexible and can be reconfigured over time to implement new logic.

Incorrect answers:
D: ASICs are custom circuits, such as Google's Tensor Processing Units (TPUs), that provide the highest efficiency, but they cannot be reconfigured as your needs change.

References: https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-accelerate-with-fpgas
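The trade-offs behind the answer can be sketched as a small decision helper. This is purely illustrative, assuming the two requirements stated in the question; the function and its return values are not part of any Azure API:

```python
def pick_compute_target(updatable_logic: bool, low_latency_no_batch: bool) -> str:
    """Illustrative mapping from the question's requirements to a compute target.

    Assumption (not an Azure API): encodes the trade-offs described above.
    """
    if updatable_logic and low_latency_no_batch:
        return "FPGA"  # reconfigurable over time, near-ASIC single-request latency
    if low_latency_no_batch:
        return "ASIC"  # highest efficiency, but logic is fixed in silicon
    if updatable_logic:
        return "GPU"   # flexible software, but best throughput relies on batching
    return "CPU"       # general purpose, typically highest inference latency

print(pick_compute_target(updatable_logic=True, low_latency_no_batch=True))  # FPGA
```

Both requirements in the question (updatable logic, low latency without batching) point to FPGAs, matching answer B.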

