AI Buzzwords Demystified

If you’ve been following AI developments lately, you’ve probably encountered terms like LLMs, RAG, ReAct, and AI Agents. While these technologies are transforming how we interact with AI, the terminology can be overwhelming. In this post, I’ll break down these concepts into digestible explanations with practical examples.

Let’s start with the foundation and progressively build up to more complex systems.



Large Language Models (LLMs): The Foundation


At the core of today’s AI revolution are Large Language Models (LLMs). Popular applications such as ChatGPT and Claude are built on top of these powerful models. They excel at generating and manipulating text based on the prompts we provide.

The basic operation is simple:

Input > LLM > Output

You ask a question or provide instructions, and the LLM generates a response based on patterns it learned during training. For example, if you ask an LLM to draft an email to Luke Skywalker inviting him for coffee, it can generate a contextually appropriate message.
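The Input > LLM > Output loop can be sketched in a few lines. The `call_llm` function below is a hypothetical stand-in for a real model API (such as a chat-completions endpoint); in practice you would swap in your provider's client.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real LLM would generate free-form text here
    # based on patterns learned during training.
    if "email" in prompt.lower():
        return ("Dear Luke,\n\n"
                "Would you like to grab a coffee together next week?\n\n"
                "Best regards")
    return "I can help with that."

prompt = "Draft a short email to Luke Skywalker inviting him for coffee."
response = call_llm(prompt)
print(response)
```

Everything happens in a single request-response round trip: the model sees only the prompt and its training data, nothing else.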

However, LLMs have a fundamental limitation: they only know what they’ve been trained on. They don’t have access to:

  • Your personal calendar
  • Your company’s internal documents
  • Real-time information (like today’s weather)
  • The ability to perform actions in the world

This is where more sophisticated systems come into play.



AI Workflows: Adding Capabilities


AI workflows extend LLMs by connecting them to external data sources and tools. This allows them to access information beyond their training data and perform specific tasks.

Let’s continue with our example: What if you want to know when your next coffee chat with Luke is scheduled?

A basic LLM would fail at this task because it has no access to your calendar. But an AI workflow could:

Input > LLM > Calendar Search Query > Calendar Results > LLM > Output

The workflow:

  1. Interprets your question
  2. Recognizes it needs calendar information
  3. Searches your calendar
  4. Uses the results to formulate an answer

We can make this even more powerful. What if you also wanted to know the weather forecast for your meeting with Luke?

The workflow would expand:

Input > LLM > Calendar Search > Weather API Query > LLM > Output

This demonstrates the power of AI workflows: they can combine multiple data sources and tools to provide comprehensive responses that a standalone LLM couldn’t deliver.
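Chaining two tools looks much the same: the output of one step feeds the next. Both `lookup_meeting` and `get_forecast` below are hypothetical stand-ins for a calendar API and a weather API, and the extracted name is hard-coded where an LLM would do the interpreting.

```python
def lookup_meeting(person: str) -> dict:
    # Stand-in for a calendar lookup returning the meeting details.
    return {"person": person, "date": "2024-06-03", "city": "Amsterdam"}

def get_forecast(city: str, date: str) -> str:
    # Stand-in for a weather API call using the meeting's city and date.
    return "sunny, 21°C"

def plan_meeting(question: str) -> str:
    # An LLM would extract the person from the question; hard-coded here.
    meeting = lookup_meeting("Luke")
    forecast = get_forecast(meeting["city"], meeting["date"])
    return (f"You meet {meeting['person']} on {meeting['date']}; "
            f"the forecast is {forecast}.")

print(plan_meeting("What's the weather for my coffee with Luke?"))
```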


Retrieval Augmented Generation (RAG): A Special Workflow


RAG is a specific type of AI workflow that’s become particularly important. It allows LLMs to search through and incorporate information from a knowledge base before generating responses.

The process works like this:

  1. You ask a question
  2. The system converts your question into a search query
  3. Relevant documents are retrieved from a database
  4. The LLM uses both your question and the retrieved information to generate an answer

This is especially valuable for companies that want AI systems that can reference their internal documentation, knowledge bases, or specialized information.

For example, a company might implement RAG to allow employees to query their internal policies:

"What's our policy on remote work during holidays?"

The RAG system would:

  1. Search the company’s policy documents
  2. Find relevant sections about remote work and holidays
  3. Generate a concise answer based on those specific policies
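A toy version of this retrieval step can be written with plain word overlap standing in for semantic search. Real RAG systems embed documents as vectors and query a vector store, and the final answer comes from an LLM that reads the retrieved passages; here the answer step is stubbed to return the top passage so the sketch stays self-contained.

```python
import re

# Hypothetical in-memory "knowledge base" of company policy snippets.
POLICY_DOCS = [
    "Remote work policy: employees may work remotely up to three days per week.",
    "Holiday policy: remote work during public holidays requires manager approval.",
    "Expense policy: meals during travel are reimbursed up to 50 EUR per day.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the question.
    q = tokens(question)
    return sorted(docs, key=lambda d: -len(q & tokens(d)))[:k]

def rag_answer(question: str) -> str:
    context = retrieve(question, POLICY_DOCS)
    # An LLM would synthesize an answer from question + context;
    # returning the best-matching passage keeps the sketch runnable.
    return context[0]

print(rag_answer("What's our policy on remote work during holidays?"))
```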


AI Agents: Adding Autonomy and Reasoning


While workflows are powerful, they still follow predetermined paths. AI agents take this a step further by adding autonomous reasoning and decision-making capabilities.

The term “ReAct” (Reasoning + Acting) captures this concept well. AI agents can:

  1. Reason about what actions to take
  2. Act based on their reasoning
  3. Observe the results
  4. Reason again based on new information
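The reason-act-observe loop can be sketched as follows. The tool registry and the decision rule are hypothetical stand-ins: in a real ReAct agent, the "reason" step is an LLM call that looks at the question plus all observations so far and picks the next action (or decides to stop).

```python
def calendar_tool(_: str) -> str:
    return "Coffee with Luke on Monday 10:00"

def weather_tool(_: str) -> str:
    return "Monday: sunny, 21°C"

TOOLS = {"calendar": calendar_tool, "weather": weather_tool}

def react_agent(question: str, max_steps: int = 4) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        # Reason: decide the next action from what we've observed so far
        # (a real agent would ask the LLM; these rules are stand-ins).
        if not any("Luke" in o for o in observations):
            action = "calendar"
        elif "weather" in question.lower() and not any("sunny" in o for o in observations):
            action = "weather"
        else:
            break  # enough information gathered: stop and answer
        # Act + observe: run the chosen tool and record its result.
        observations.append(TOOLS[action](question))
    return " | ".join(observations)

print(react_agent("What's the weather for my coffee chat with Luke?"))
```

Unlike a fixed workflow, nothing here prescribes the order calendar-then-weather in advance; each step is chosen based on what has been observed so far.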

Let’s look at a practical example. Imagine a content creation workflow:

  1. Find relevant URLs on a topic
  2. Summarize the content from those URLs
  3. Create a LinkedIn post based on the summaries

In a traditional workflow, a human might need to verify the quality of the post before publishing. An AI agent system could remove the need for that manual review by adding an automated quality-control step:

  1. Find relevant URLs (using Agent A)
  2. Summarize those URLs (using Agent A)
  3. Create a LinkedIn post draft (using Agent A)
  4. Evaluate if the post meets quality criteria (using Agent B)
  5. If not, revise the post until it does

This demonstrates a key aspect of AI agents: they can collaborate and check each other’s work, similar to how humans might review each other’s contributions.

The more reviewing agents you add, the more likely it is that the final output is accurate.


Specialized AI Agents


The most powerful AI agent systems use specialized agents for specific tasks. Think of this as having a team of experts, each with their own specialty.

For example, a fraud detection system might include:

  • A data gathering agent that collects transaction information
  • A pattern recognition agent trained specifically on fraudulent transaction patterns
  • A risk assessment agent that evaluates potential impact, understanding the policies and risk position of your company
  • A notification agent that alerts appropriate personnel

Each agent is optimized for its specific task, creating a system that’s more effective than a single general-purpose agent would be.
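The fraud-detection example above can be sketched as a pipeline of stubbed specialists. Every function is a hypothetical stand-in: real versions would be separate LLM- or ML-backed services, each with its own model, prompt, or ruleset.

```python
def gather_data(tx_id: str) -> dict:
    # Data-gathering agent: collect transaction information.
    return {"id": tx_id, "amount": 9800, "country": "unusual"}

def detect_pattern(tx: dict) -> bool:
    # Pattern-recognition agent: flag known fraud signals (toy rule).
    return tx["amount"] > 5000 and tx["country"] == "unusual"

def assess_risk(tx: dict, suspicious: bool) -> str:
    # Risk-assessment agent: weigh impact against company policy (stubbed).
    return "high" if suspicious else "low"

def notify(risk: str) -> str:
    # Notification agent: alert the appropriate personnel for the risk level.
    return "Paged fraud team" if risk == "high" else "Logged for review"

def run_pipeline(tx_id: str) -> str:
    tx = gather_data(tx_id)
    return notify(assess_risk(tx, detect_pattern(tx)))

print(run_pipeline("tx-1042"))
```

Because each stage has a narrow contract, you can improve or replace one specialist without retraining or re-prompting the others.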



Putting It All Together


Understanding the progression from LLMs to workflows to agents helps clarify how these systems become increasingly powerful:

  • LLMs provide the foundational ability to understand and generate human language
  • AI Workflows connect LLMs to external data and tools to expand their capabilities
  • AI Agents add autonomous reasoning and decision-making to create systems that can adapt and self-improve

As these technologies continue to evolve, we’ll see increasingly sophisticated applications that combine these approaches to solve complex problems.

The most exciting developments aren’t about any single technology, but rather how they can be integrated to create systems that augment human capabilities in ways we’re just beginning to explore.