# AI-100 — Question 426

**Type:** multiple_choice
**Topics:** topic_1

## Question

You are designing an AI solution in Azure that will perform image classification.
You need to identify which processing platform will provide you with the ability to update the logic over time. The solution must have the lowest latency for inferencing without having to batch.
Which compute target should you identify?

## Correct Answer

Field-programmable gate array (FPGA)

## Explanation

FPGAs, such as those available on Azure, provide performance close to that of ASICs while remaining flexible: they can be reconfigured over time to implement new logic, and they serve individual requests with low latency, with no need to batch.

Incorrect Answers:
D: ASICs, such as Google's Tensor Processing Units (TPUs), are custom circuits that provide the highest efficiency, but they cannot be reconfigured as your needs change.

References:
https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-accelerate-with-fpgas
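To see why "lowest latency without batching" points at per-request hardware like FPGAs, consider the queueing delay batching introduces: the first request in a batch must wait for the batch to fill before any computation starts. The sketch below is a purely illustrative back-of-the-envelope model, not Azure code; all timing numbers are hypothetical.

```python
# Illustrative latency model (hypothetical numbers, not Azure benchmarks):
# per-request inference vs. batched inference.

def per_request_latency(compute_ms: float) -> float:
    """Each request is processed as soon as it arrives (FPGA-style serving)."""
    return compute_ms

def batched_latency(arrival_gap_ms: float, batch_size: int, compute_ms: float) -> float:
    """Worst-case latency for the first request in a batch: it waits for
    (batch_size - 1) further arrivals before the batch is computed."""
    return (batch_size - 1) * arrival_gap_ms + compute_ms

if __name__ == "__main__":
    single = per_request_latency(compute_ms=2)          # e.g. ~2 ms per image
    batched = batched_latency(arrival_gap_ms=10,        # a request every 10 ms
                              batch_size=8,
                              compute_ms=5)             # 5 ms for the whole batch
    print(f"per-request: {single} ms, batched worst case: {batched} ms")
    # → per-request: 2 ms, batched worst case: 75 ms
```

Even with a faster amortized compute time, the batched path pays a fill-the-batch wait that per-request serving never incurs, which is the property the question is testing.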

**Reference:** examtopics_answer_description

---
Source: https://hiexam.net/q/microsoft/AI-100/426  