Issue #40 – Defining AI Solutions as Simply as Possible
LLMs? RAGs? Agents? AI is confusing! Take ten minutes to understand it all without the technical jargon
Read time: 12 minutes
It seems like most people don’t remember the Dot Com Bubble.
I was only ten years old, but the lessons still resonate with me. People got hyped about the Internet's potential and launched every conceivable Internet idea, and money subsequently poured in from everywhere.
Then, the reality of technological progress hit, and everything crashed!
Why do I bring this up? Well, we are going through the same thing with AI.
If you replace the Internet with AI, you have today’s Dot Com Bubble: hundreds of new start-ups, ideas, and dreams driven by the hype of technology rather than its realistic abilities.
It has been 2.5 years since ChatGPT was released, and the world has gone into an AI craze. Since 2022, billions of dollars have been invested in AI technology; some of it has delivered positive results, but much of it has been wasted.
But all through this hype cycle, one thing still rings true—most people don’t understand AI!
We have actually been using the word so much that it doesn't feel real anymore (a phenomenon called semantic satiation, explained well by Carly Taylor in her blog). Even 'data and AI experts' don't necessarily understand the technology underpinning these solutions.
Worst of all, business and data leaders are still expecting AI to live up to its hype without putting in the work (hint: You can’t replace human workers, speed up operational processes, and deliver positive business results if you don’t approach AI strategically).
So in this issue of The Data Ecosystem, we are going to dive into what AI really is:
What does AI mean?
What types of AI solutions exist?
How do you use them effectively?
Hopefully, this will give people something tangible to anchor their understanding of AI, rather than a vague word stretched to mean almost anything…
Overview of Artificial Intelligence
Before we explore the specific business use cases, we need to approach Artificial Intelligence at the highest level and explain what it means from a theoretical perspective.
Let’s go back to my earlier definition of AI from our first Machine Learning & AI article on the basics:
AI (Artificial Intelligence) – The ability of computers to ‘think’, mimicking human intelligence through algorithms (often leveraging Machine Learning) and behaving in a way that resembles an intelligent human.
With that in mind, what does the ability of computers to think actually mean? Current AI theorists and philosophers break this down into two classification systems:
Type 1 AI – AI categorised by capability
Type 2 AI – AI categorised by functionality
Capability-based classification originates primarily from the AI research and philosophy communities, focusing on Artificial Intelligence's ultimate state based on its potential and limitations. Rather than practical applications of AI, these definitions (and framework for AI progress) outline the horizon of what AI might achieve and the potential societal implications as AI advances. These terms are used at a high level to describe the capability of types of AI, beneficial for long-term planning, risk assessment, and ethical discussions. There are three stages of capability-based AI:
Artificial Narrow Intelligence (ANI) is the phase we are at today. These AI systems are designed to excel at specific tasks but lack the broader understanding to go beyond their domain. These AI systems have use cases with limited and pre-defined parameters, constraints and contexts. Even an LLM like ChatGPT or Claude falls into this category, as its outputs are constrained by the oversight and direction the user provides.
Artificial General Intelligence (AGI) is a system with human-equivalent capabilities across virtually all cognitive tasks. This concept of Artificial Intelligence does not yet exist in practice, but it would be able to understand, learn, and apply knowledge across domains without specific training for each task. Imagine an AI that can act just like a human with context switching and completing tasks without prompts. The implications of this would be significant in terms of technological progress, both from a societal and practical perspective.
Artificial Super Intelligence (ASI) describes hypothetical AI systems that surpass the most intelligent human minds across all fields, from scientific creativity to social skills. Given AI’s current limitations in understanding situations and context, you can see how this remains in the realm of speculation. However, the implications of ASI are incredibly profound; for example, should machines be as smart as us? What ethical implications might that have? How would that impact society and life as we know it?
The other form of classification is functionality-based. This focuses on how AI systems work from a cognitive science and engineering perspective. Here we get into how AI systems process information or interact with their environment, providing a more technical roadmap of advancement. This framework is especially valuable for developers and business leaders to understand the practical requirements and limitations of different AI implementations. There are four levels of classification:
Reactive Machines are the simplest form of AI. They respond to inputs without any memory of past interactions or ability to learn from them. They are built on present data using pre-programmed rules, providing consistent, predictable results for specific use cases.
Limited Memory AI is the next step and what you find in many modern AI systems. These systems learn from historical data and experiences, adapting their responses based on new information. However, they cannot retain that data over the long term, similar to how your LLM conversations are limited to a certain number of interactions. Most of what you see today is a Limited Memory AI system, from GenAI to virtual assistants and chatbots to self-driving cars.
Theory of Mind AI is still a theoretical application. It refers to systems that can understand their audience's (usually human) thoughts, beliefs, and intentions, allowing them to recognise emotions, infer motives and reasoning, and react in turn. This type of AI is close to AGI, with huge potential in the customer service, education, and healthcare industries, given the genuine understanding of human needs and emotions. You could argue that existing LLMs are edging towards Theory of Mind AI, as chatbots do respond to your emotive expressions, but they aren't quite there yet.
Self-aware AI gets into the ASI range, and is still the furthest theoretical version of AI. Self-Aware AI deals with systems that have consciousness and self-awareness similar to humans. This would mean AI with subjective experiences and a sense of self, privy to its own emotions, needs and beliefs. This is still very far away and very much in an ethical grey zone.
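To make the first two functionality-based categories concrete, here is a toy contrast in code. Both "systems" are hypothetical rule-based stand-ins (not real AI models): the reactive one answers from the current input only, while the limited-memory one keeps a short rolling history, loosely analogous to an LLM's context window.

```python
class ReactiveMachine:
    """Responds to the current input only; no memory of past turns."""

    def respond(self, message: str) -> str:
        return "Hello!" if "hello" in message.lower() else "I can only greet."


class LimitedMemoryAssistant:
    """Keeps a short rolling history, like an LLM's context window."""

    def __init__(self, window: int = 3):
        self.window = window
        self.history: list[str] = []

    def respond(self, message: str) -> str:
        # Keep only the last `window` messages -- older context is lost,
        # which is exactly the "limited" part of Limited Memory AI.
        self.history = (self.history + [message])[-self.window:]
        if "my name is" in message.lower():
            return "Nice to meet you!"
        if "what is my name" in message.lower():
            for past in self.history:
                if "my name is" in past.lower():
                    return "You told me: " + past.split("is", 1)[1].strip()
            return "You haven't told me yet."  # context fell out of the window
        return "Go on."


assistant = LimitedMemoryAssistant()
assistant.respond("My name is Ada")
print(assistant.respond("What is my name?"))  # → You told me: Ada
```

The reactive machine would fail the same name question every time, because each call starts from scratch; the limited-memory version succeeds only while the relevant turn is still inside its window.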
To be blunt, you don't need to know the capability- and functionality-based terminology of AI to actually use or implement it, but it's helpful to know the journey we are on from a theoretical perspective, as it helps us understand what AI means in the grand scheme of things. In the next section, we will get into the practical use cases and builds of AI.
Grouping Today’s Practical Applications of AI
Even as an experienced Data Strategist, I often get confused about AI's practical applications.
So much hype around terms like GenAI, RAG, or Agentic AI has distorted my view (and probably others') of what other types of AI exist and how they are relevant for different use cases.
So this section is about breaking it all down; I want to dispel the notion that AI is just a chatbot or an internal element of some SaaS software.
I’ve classified four approaches to AI based on their primary functions, how we interact with them and their business applications.

Note that this will probably shift, and not every AI expert will agree with the classification, but based on my research, this is my best attempt at delineating AI into distinct groupings:
1. Language-Based AI
AI systems that process, understand, and generate human language, interacting with users via text and speech methods. These models often make up what we think of as GenAI.
With the advent of ChatGPT and the enormous amount of language-based data freely available on the Internet, these are often the most popular AI systems:
Large Language Models (LLMs) are the most common form of AI system, and are often referred to as foundational models. The foundational aspect comes from the fact that LLMs are built on massive amounts of data, allowing the system to recognize patterns and derive the context from relevant documentation. Based on the prompt the user provides in the LLM input area (usually through a chatbot), the system generates an answer with a human-like language response. With additional instructions, these models can provide specific answers, increasing the accuracy and usability of their outputs. These systems are great for content creation, analysis, and coding assistance, but can provide whatever language-based answer the user requests.
Retrieval-Augmented Generation (RAG) is like supercharging the usefulness of the LLM. Most LLMs are trained on an immense amount of training data, which may not be highly relevant for the use case in question. A RAG engine adds onto the LLM, pulling in relevant information from your company's documents or databases to improve the context of the data. This allows the AI system to prioritise your business's specific details and policies within the response, ensuring it’s not based on just general knowledge. Decoding ML has a great article on this if you want to learn more.
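The retrieval step of a RAG pipeline can be sketched in a few lines. This is a deliberately minimal illustration: the document store, the scoring method (simple keyword overlap instead of a real embedding model and vector database), and the prompt template are all made-up stand-ins.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query.
    A real RAG system would use embeddings and a vector database."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the user's question with retrieved company context."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"


# Hypothetical company documents standing in for a real knowledge base.
docs = [
    "Refunds are processed within 14 days of purchase.",
    "Our office is closed on public holidays.",
    "Refund requests must include the original receipt.",
]
prompt = build_prompt("How do refunds work?", docs)
print(prompt)
```

The key idea survives even in this toy form: the LLM never gets the raw question alone; it gets the question plus the most relevant slices of your own documents, so the answer is grounded in your business rather than general training data.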
2. Perception-Based AI
Systems that interpret and analyse sensory inputs from the physical world. These include images, video, audio, and other sensor data, bridging the gap between digital systems and the physical environment.
As sensor-based technology (including wearables, IoT, edge computing, etc.) becomes more ubiquitous, the role of perception-based AI in understanding and processing events in the physical world will only increase:
Computer Vision Models focus on image recognition, classifying objects based on large amounts of visual data. The AI system uses neural networks and images, videos and other visual inputs to determine what objects look like and the environment they may be found in, allowing for easy detection. This kind of AI has vastly improved the ability of manufacturers to improve quality control through defect detection, allowing self-driving cars to navigate the streets or identify anomalies within medical imaging. In addition to object recognition, the latest systems can understand complex scenes and generate new images from descriptions.
Speech Recognition/Generation is everywhere, and AI has supercharged its usefulness. These systems learn from speech pattern data to understand what people are saying. Simple AI systems focused on transcribing speech or recognising commands (e.g. Siri or Alexa), but more complex models now understand context and engage in natural conversation. Given the immense nuance in sentence construction, accents, and dialects, the ability of these systems to factor context into their output improves accuracy and usefulness immensely.
3. Decision-Optimisation AI
These embedded systems use AI models and data to make or recommend optimal choices. A lot of popular AI is customer-facing, whereas these models tend to work in the background, contextualising complex situations with multiple variables and constraints to determine the ideal outcome where a decision must be made.
Think of this type as existing within all those technologies that market themselves as 'AI-enabled'. Note I'm not referring to Embedded AI, which is a common term for AI within edge devices (Embedded AI is more about the placement than the type). This type of system excels at finding the best possible solutions to deliver tangible business value and is sort of an extension of machine learning models:
Predictive AI uses machine learning methods to forecast future outcomes and trends, identifying optimal scenarios based on pre-determined criteria. Most of these systems fall within the realm of Machine Learning, but AI systems can go beyond ML models to solve broader, more complex problems, with the AI system learning more autonomously on its own.
Recommendation Engines use ML models to find patterns in user data and recommend relevant items, products, services, etc. Similar to Predictive AI, the sophistication of these systems has grown extensively over the past decade. Past ML models focused on current actions, but AI can identify previously undiscovered patterns from how users interact with the technology. There are many types of recommender engines, such as collaborative filtering, content-based filtering, and hybrid systems.
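Collaborative filtering, the first of those approaches, can be shown in miniature: find the user most similar to you, then suggest something they liked that you haven't seen. The users, items, and ratings below are made-up illustrative data, and real systems use far richer similarity measures.

```python
from math import sqrt

# Hypothetical user-item ratings standing in for real interaction data.
ratings = {
    "alice": {"book": 5, "film": 3, "game": 4},
    "bob":   {"book": 5, "film": 2, "game": 4, "album": 5},
    "carol": {"film": 5, "album": 1},
}


def cosine(u: dict, v: dict) -> float:
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)


def recommend(user: str) -> str:
    """Suggest the unseen item best liked by the most similar user."""
    others = {name: r for name, r in ratings.items() if name != user}
    neighbour = max(others, key=lambda n: cosine(ratings[user], ratings[n]))
    unseen = {i: r for i, r in ratings[neighbour].items()
              if i not in ratings[user]}
    return max(unseen, key=unseen.get)


print(recommend("alice"))  # → album (Bob is Alice's closest neighbour)
```

Content-based filtering works the other way round, comparing item attributes rather than user behaviour, and hybrid systems blend both signals.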
Check out my last article if you want more knowledge about the different types of ML models.
4. Autonomous AI Systems
Next-generation systems that perceive their environment, make decisions, and take actions with minimal human intervention. Within these systems, algorithms are designed to learn, adapt, and execute tasks autonomously, which marks a difference from some of the previous categories that still have some level of human intervention involved.
While I've listed this category as distinct from the previous ones, these AI systems still utilise elements of all three earlier groups. For example, you will see LLMs, computer vision, or decision-optimisation AI used within these autonomous systems, but what is being built resembles a different solution as it combines those elements:
AI Agents are the latest rage in the AI hype world. These systems are built on foundational models (like LLMs) and combined with the ability to access tools (often external sources) and use their own learned experience (or memory) to produce a more contextually relevant response. The tools and memory are the key components that make AI Agents autonomous and extremely useful. These elements enhance its reasoning capability and allow it to adapt to changing environments, rules or learning from interactions. This builds on most LLM chatbots, which are fairly static in their abilities to adapt and don’t have decision-making abilities beyond their initial programmed rules. I highly recommend reading Aurimas Griciūnas’s AI Agents newsletter series, as this goes into way more detail and actually helps you build one.
This is a great, simple diagram Aurimas Griciūnas built about AI Agents. Source: www.newsletter.swirlai.com/p/building-ai-agents-from-scratch-part
Intelligent Process Automation (IPA) refers to systems that incorporate AI to manage and automate digital processes. This builds upon both Robotic and Digital Process Automation (RPA and DPA), which are types of task automation software. Where IPA differs is in its ability to take over human decision-making and increase its scope of work. Instead of rule-based processes, IPA uses predictive AI or other foundational models to deal with complex situations that may fall outside initial RPA/DPA programming.
Approaching AI Intelligently
The landscape of AI is vast and evolving rapidly, which can make it overwhelming for executives trying to determine where to invest. I’ve spent hours researching and writing this article and I even feel like I’m barely scratching the surface!
So as a CEO, business executive, data lead, or practitioner, it's important not to just chase buzzwords or try to replicate high-profile AI implementations.
Instead, focus on matching specific AI capabilities to your most pressing business challenges.
Based on the four categories of AI we talked about, here is how you match it with specific opportunities:
Language-Based AI (LLMs, RAG) should address knowledge work inefficiencies or gaps—use it to transform customer service operations, streamline document processing, or enhance knowledge workflows
Perception-Based AI (Computer Vision, Speech Recognition) is about converting sensory inputs into actionable insights—ideal for quality control, security monitoring, or products that require multi-sensory interfaces
Decision-Optimization AI (Predictive Analytics, Recommendation Engines) should be embedded into your complex decision processes, potentially building off existing ML models and solutions—deploy it for scenarios where you need to optimise inventory, personalise customer experiences, or assess financial risk
Autonomous Systems AI (Process Automation, AI Agents) is a more onerous step that is meant to reduce manual intervention—consider it for logistics improvement, document processing, or consistent execution of complex workflows.
Remember: these categories aren't mutually exclusive in practice. Many successful AI implementations combine multiple approaches. For example, a customer service AI might use speech recognition to understand queries, language models to process them, and autonomous models to automate decision-making.
The most important thing is to start with your business problem and work backwards to the technology, not the other way around. AI may not even be the right answer; you may be better off starting with simpler descriptive and diagnostic analytics to ensure you are solving the problem the right way.
Last year I did an article on how to think of AI more holistically, so if you want to figure out how to implement AI, I suggest giving it a read.
The next issue will be a continuation of the ML and AI series but focusing on the production process for ML models. This is a highly subjective topic as there are so many different approaches to building, testing and productionising Machine Learning.
Next week, I will try my best to create a more manageable, overarching approach, while considering the different nuances in how people build ML and AI models. Until then, have a great week!
Thanks for the read! Comment below and share the newsletter/ issue if you think it is relevant! Feel free to also follow me on LinkedIn (very active) or Medium (increasingly active). See you amazing folks next week!