Isaac Chung · 3 min read

How do you measure the performance of LLM serving systems? Production services are often evaluated with metrics like requests per second (RPS), uptime, and latency. In computer vision, frames per second (FPS) is often the main throughput metric for near-real-time detection and tracking. Does serving LLMs have something similar? Certainly. A recent conversation with my team (after they read my Ollama blog) got me thinking about the additional metrics we should be tracking.

Key questions I'll address are:
  • What metrics does Ollama provide?
  • Why can't I just use tokens per second?
  • What other LLM serving metrics should I consider?
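As a preview of the first two questions: Ollama's non-streaming `/api/generate` response includes timing fields (in nanoseconds) such as `eval_count` and `eval_duration`, from which tokens per second can be derived. Below is a minimal sketch of that calculation; the hard-coded sample payload is illustrative, so the numbers are not from a real run.

```python
# Ollama's /api/generate response (with stream=False) reports timing
# fields in nanoseconds. A sample payload is hard-coded here so the
# sketch runs without a live Ollama server; the values are made up.
sample_response = {
    "eval_count": 282,            # tokens generated
    "eval_duration": 4709213000,  # generation time, in nanoseconds
    "prompt_eval_count": 26,          # prompt tokens processed
    "prompt_eval_duration": 130079000,  # prompt processing time, in ns
}

def tokens_per_second(resp: dict) -> float:
    """Decode throughput: generated tokens divided by generation time."""
    return resp["eval_count"] / resp["eval_duration"] * 1e9

print(f"{tokens_per_second(sample_response):.1f} tokens/s")
```

This single number hides a lot (prompt processing vs. generation, batching, concurrency), which is exactly why the post goes on to look beyond tokens per second.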