# AWS-Certified-AI-Practitioner-AIF-C01 — Question 425

**Type:** multiple_choice
**Topics:** topic_1

## Question

A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible.
Which solution will meet these requirements?

## Correct Answer

A. Deploy optimized small language models (SLMs) on edge devices.

## Explanation

Selected Answer: A
A: Deploy optimized small language models (SLMs) on edge devices.

Explanation:
Deploying optimized small language models (SLMs) on edge devices gives the lowest latency because inference runs directly on the device, eliminating the network round-trip to a cloud endpoint. SLMs are lightweight and optimized (for example, through quantization and pruning) to run efficiently on resource-constrained hardware, which makes them well suited to edge computing.
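The latency argument above can be illustrated with a small simulation. This is a hedged sketch, not a real SLM: `local_slm_inference` stands in for an on-device model (no network hop), while `cloud_llm_inference` adds a simulated network round-trip (`network_rtt_s` is an assumed value) before any response arrives. Both function names and the 150 ms figure are hypothetical, chosen only to show why on-device inference wins on latency.

```python
import time


def local_slm_inference(prompt: str) -> str:
    # Hypothetical stand-in for an optimized on-device SLM:
    # the model runs locally, so there is no network round-trip.
    return f"local answer to: {prompt}"


def cloud_llm_inference(prompt: str, network_rtt_s: float = 0.15) -> str:
    # Simulated cloud-hosted model: the (assumed) network round-trip
    # delays the response before any inference output arrives.
    time.sleep(network_rtt_s)
    return f"cloud answer to: {prompt}"


def timed(fn, prompt: str):
    # Measure wall-clock latency of a single inference call.
    start = time.perf_counter()
    result = fn(prompt)
    return result, time.perf_counter() - start


local_result, local_latency = timed(local_slm_inference, "device status?")
cloud_result, cloud_latency = timed(cloud_llm_inference, "device status?")
print(f"local: {local_latency * 1000:.1f} ms, cloud: {cloud_latency * 1000:.1f} ms")
```

Even in this toy setup, the cloud path pays the full round-trip time before the first byte of output; on real edge deployments, variable connectivity makes that gap larger and less predictable.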

**Reference:** examtopics_top_comment

---
Source: https://hiexam.net/q/amazon/AWS-Certified-AI-Practitioner-AIF-C01/425  
Practice (tracked): https://hiexam.net/study/AWS-Certified-AI-Practitioner-AIF-C01/practice