hiexam
amazon · AWS-Certified-AI-Practitioner-AIF-C01 · Q425 · multiple_choice · topic_1

A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible. Which solution will meet these requirements?
  • A. Deploy optimized small language models (SLMs) on edge devices.
  • B. Deploy optimized large language models (LLMs) on edge devices.
  • C. Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.
  • D. Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.
Explanation
Selected Answer: A. Deploy optimized small language models (SLMs) on edge devices.

Deploying optimized SLMs directly on edge devices yields the lowest latency because inference happens on the device itself, with no network round trip to the cloud. SLMs are lightweight and designed to run efficiently on resource-constrained hardware, which makes them well suited to edge computing. Options C and D add network latency on every request because the model is centralized (and asynchronous communication does nothing to reduce per-inference delay), while option B is impractical because LLMs typically exceed the compute and memory available on edge devices.
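The latency trade-off can be sketched with a toy model: on-device inference pays only the model's compute time, while a centralized API pays compute time plus a network round trip. The millisecond figures below are illustrative assumptions, not benchmarks.

```python
# Illustrative latency sketch (all numbers are assumptions, not measurements).
ON_DEVICE_SLM_COMPUTE_MS = 40   # assumed: optimized SLM on edge hardware
CLOUD_LLM_COMPUTE_MS = 120      # assumed: larger model on cloud servers
NETWORK_ROUND_TRIP_MS = 80      # assumed: edge-to-cloud round trip

def edge_slm_latency_ms() -> int:
    """Option A: model runs on the device, so no network hop is involved."""
    return ON_DEVICE_SLM_COMPUTE_MS

def centralized_api_latency_ms() -> int:
    """Options C/D: every inference pays the round trip plus cloud compute."""
    return NETWORK_ROUND_TRIP_MS + CLOUD_LLM_COMPUTE_MS

print(edge_slm_latency_ms())         # 40
print(centralized_api_latency_ms())  # 200
```

Even with generous network assumptions, the centralized options carry a fixed round-trip cost per request that on-device inference avoids entirely.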

Reference: examtopics_top_comment
