
Core Insights from Agentic AI Development

This document outlines key learnings and practical observations gained from building agentic artificial intelligence systems. (Acknowledgements to Anthropic’s “Building Effective Agents” for inspiration).

🤖 Workflows vs Agents: What’s the Difference?

First off, the article made a useful distinction:
a) Workflows = predefined, coded paths where LLMs and tools follow a fixed structure.

b) Agents = LLMs that dynamically control their own process and tool usage. They make decisions on the fly based on the current state of the task.

That flexibility in agents comes at a price — more latency, more compute, and a bigger risk of things going wrong. So, it’s better to start simple and only go agentic when the problem truly calls for it.

🧱 Building Blocks of Agentic AI

At the heart of any agentic system is an augmented LLM — a language model powered up with tools, memory, and retrieval capabilities. The LLM isn’t just answering a prompt; it’s thinking, planning, retrieving info, and even calling APIs if needed.
If you’re building with this kind of system, the article recommends focusing on two key things:
a) Tailor the LLM to your use case. Don't just plug in ChatGPT and hope it'll figure things out. Tune it.

b) Give it a clear, well-documented interface. Tools, inputs, and outputs should be well defined. Think of it like designing a UI for a really smart intern.
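That "well-documented interface" idea can be made concrete. Below is a minimal sketch of a tool definition with an explicit input/output contract; the `Tool` class and the `lookup_order` example are hypothetical illustrations, not from the article.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str  # what the tool does, plus its input/output contract
    run: Callable[[str], str]

def lookup_order(order_id: str) -> str:
    # Stand-in for a real order-tracking API call.
    return f"Order {order_id}: shipped"

TOOLS = {
    "lookup_order": Tool(
        name="lookup_order",
        description="Fetch order status. Input: an order ID. Output: a one-line status string.",
        run=lookup_order,
    )
}

print(TOOLS["lookup_order"].run("A123"))
```

The description string is what the LLM reads when deciding whether and how to call the tool, so it deserves the same care as public API docs.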

⚙️ Common Agentic Workflows

The article outlines five common patterns developers use to implement AI workflows:

1. Prompt Chaining

Break down tasks into steps. Each LLM output feeds into the next step — like a pipeline.
  • 🕐 Use when: The task can be easily broken into fixed stages.
  • ✅ Benefit: Higher accuracy
  • ⚠️ Tradeoff: Slower, more latency
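The pipeline idea can be sketched in a few lines. Here `call_llm` is a hypothetical stand-in for a real model call (it just tags its input so the chaining is visible):

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return f"<llm:{prompt}>"

def chain(task: str, steps: list) -> str:
    result = task
    for step in steps:
        # Each step's output becomes the next step's input.
        result = call_llm(f"{step}: {result}")
    return result

output = chain("write a product announcement", ["outline", "draft", "polish"])
```

Because each stage sees only the previous stage's output, you can also insert validation checks between steps to catch errors early.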

2. Routing

Route different types of inputs to different LLM prompts or toolsets.
  • 🕐 Use when: You can cleanly classify inputs (e.g., customer support queries).
  • ✅ Benefit: Specialized, accurate responses
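A minimal routing sketch, using customer support as in the example above. The keyword classifier and prompt names are hypothetical; a real system might use an LLM or a small classification model instead:

```python
def classify(query: str) -> str:
    # Stand-in classifier; a real router could be an LLM call.
    if "refund" in query.lower():
        return "billing"
    if "password" in query.lower():
        return "account"
    return "general"

PROMPTS = {
    "billing": "You are a billing specialist. ",
    "account": "You are an account-security specialist. ",
    "general": "You are a helpful support agent. ",
}

def route(query: str) -> str:
    # Send each input type to its specialized prompt.
    return PROMPTS[classify(query)] + query
```

The payoff is that each downstream prompt can be short and specialized instead of one giant prompt trying to handle everything.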

3. Parallelization

Split tasks or run multiple variations of the same task simultaneously.
  • Sectioning: Divide one big task into parts.
  • Voting: Run the same task multiple times and choose the best.
  • 🕐 Use when: Speed matters or when multiple perspectives help improve results
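The voting variant can be sketched as follows; `call_llm` is a hypothetical stand-in that fakes the run-to-run variation a sampled model would show:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str, seed: int) -> str:
    # Stand-in: a real model would give varying answers across samples.
    return "spam" if seed % 3 else "not spam"

def vote(prompt: str, n: int = 5) -> str:
    # Run the same task n times in parallel, keep the majority answer.
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda s: call_llm(prompt, s), range(n)))
    return Counter(answers).most_common(1)[0][0]
```

Sectioning looks the same structurally, except each parallel call gets a different slice of the task instead of the same prompt.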


4. Orchestrator + Workers

One central LLM breaks a task into subtasks and delegates to worker LLMs.
  • 🕐 Use when: You can’t predict the sub-tasks in advance.
  • ✅ Benefit: Super flexible.
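A minimal sketch of the pattern. Here `plan` stands in for the orchestrator LLM, which in a real system would decide the subtasks at runtime rather than from a fixed template:

```python
def plan(task: str) -> list:
    # Stand-in for the orchestrator LLM breaking the task down.
    return [f"research: {task}", f"summarize: {task}"]

def worker(subtask: str) -> str:
    # Stand-in for a worker LLM handling one delegated subtask.
    return f"done ({subtask})"

def orchestrate(task: str) -> list:
    return [worker(s) for s in plan(task)]

results = orchestrate("competitor pricing")
```

The key difference from parallelization is that the list of subtasks is itself a model output, not something you hard-coded.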

5. Evaluator + Optimizer

One LLM does the work; another evaluates and gives feedback. Like writer vs editor.
  • 🕐 Use when: You have clear evaluation criteria and room for iteration.
  • ✅ Benefit: Higher quality outputs over time.
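The writer/editor loop can be sketched like this; `generate` and `evaluate` are hypothetical stand-ins for the two LLM calls, with a toy acceptance criterion:

```python
def generate(prompt: str, feedback: str = "") -> str:
    # Stand-in writer LLM; incorporates feedback when given.
    return f"draft of {prompt} ({feedback})" if feedback else f"draft of {prompt}"

def evaluate(draft: str):
    # Stand-in editor LLM. Returns (accepted, feedback); the toy
    # criterion is that the draft reflects the "add examples" note.
    if "add examples" in draft:
        return True, ""
    return False, "add examples"

def refine(prompt: str, max_iters: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_iters):
        ok, feedback = evaluate(draft)
        if ok:
            break
        draft = generate(prompt, feedback)
    return draft
```

This is also why the pattern needs clear evaluation criteria: if the evaluator can't say *why* a draft fails, the writer has nothing to iterate on.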

🤔 So What Is an Agent, Really?

An agent starts when you give it a command. It figures out what needs to be done, plans how to do it, gathers info at every step, and works independently toward a goal. The process ends when the task is complete (or it gives up after too many tries).
To be effective, agents need:
  • Understanding of complex inputs
  • Reasoning and planning skills
  • Reliable tool use and error recovery
But don’t be fooled by the fancy setup — at their core, agents are usually just LLMs running loops and calling tools based on feedback. What matters is how well you design the tools, environment, and safeguards.
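That "LLMs running loops and calling tools based on feedback" core can be sketched in a dozen lines. `call_llm` and the search tool here are hypothetical stand-ins:

```python
def call_llm(goal: str, history: list):
    # Stand-in for the agent LLM choosing its next action from context.
    if not history:
        return ("search", goal)  # first step: gather information
    return ("finish", f"answer using {history[-1]}")

def run_agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        action, arg = call_llm(goal, history)
        if action == "finish":
            return arg
        history.append(tools[action](arg))  # feed tool results back in
    return "gave up"  # bail out after too many tries

TOOLS = {"search": lambda q: f"results for '{q}'"}
answer = run_agent("agent design patterns", TOOLS)
```

Everything interesting lives outside this loop: the tool implementations, the prompt that produces actions, and the safeguards around `max_steps`.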

🛠️ When to Use Agents (and When Not To)

Use agents when:
  • You're solving open-ended problems.
  • You can't predefine all the sub-tasks.
  • You trust the agent's ability to make decisions.

But also remember:
  • Agents = higher costs + greater chance of compounding errors.
  • So, test them rigorously, preferably in sandboxed environments, before deploying anything serious.

✨ Final Takeaway

The biggest thing I took away is this:

Success in building with LLMs isn’t about making the most complex system — it’s about making the right system.

Start small. Evaluate. Optimize. And only add complexity when the simple stuff doesn’t cut it.
