This page describes a variety of options for using generative AI technology. While the page is titled Building Gen AI applications, most of these options require no code at all.
- Chat Interface (Browser)
- Chat Interface (Desktop/Mobile Apps)
- AI Workflows
- Custom Chatbots
- AI Agents
- Multi-Agent Collaboration (Agentic AI)
- Products with LLMs embedded
- Framework to install LLMs locally
- Programmatic Access to LLMs
Chat Interface (Browser)
Users interact directly with LLMs through a browser, e.g. ChatGPT, Gemini, Claude, and Perplexity, each accessing its respective LLMs.
The user types a prompt into the chat interface in the browser and receives a response. The interface passes the prompt to the LLM and returns the LLM's response to the user.
Chat Interface (Desktop/Mobile Apps)
Users interact with desktop or mobile applications which in turn connect to the LLMs, e.g. the ChatGPT mobile app.
The user types a prompt into the chat interface provided by the application and receives a response. The application passes the prompt to the LLM and returns the LLM's response to the user.
AI Workflows
An AI workflow is a sequence of steps that combines AI models, data processing, and automation tasks to achieve a specific outcome. It works like a pipeline: inputs go in, multiple stages process them, and outputs come out. Here, the workflow is fixed; the system follows the same steps in the same order every time. Tools such as n8n, Zapier, and Make.com can be used to build such workflows.
Example: Resume Screening Workflow
- Input: Upload resumes
- Step 1: Extract text from resumes (OCR / parser).
- Step 2: Use an AI model to match resumes to job description.
- Step 3: Rank candidates based on score.
- Step 4: Send shortlisted candidates to recruiter.
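The fixed pipeline above can be sketched in Python. This is a toy illustration, not a real screening system: the scoring step is a simple keyword count standing in for an AI matching model, and all names and data are invented.

```python
# A minimal sketch of the fixed resume-screening workflow.
# Each function corresponds to one step of the pipeline, run in the
# same order every time.

def extract_text(resume: dict) -> str:
    # Step 1 stand-in: a real workflow would use OCR or a PDF parser here.
    return resume["text"].lower()

def score_against_job(text: str, job_keywords: list[str]) -> int:
    # Step 2 stand-in for an AI matching model: count keyword hits.
    return sum(1 for kw in job_keywords if kw in text)

def screen_resumes(resumes: list[dict], job_keywords: list[str],
                   top_n: int = 2) -> list[str]:
    # Steps 3-4: rank candidates by score and shortlist the top ones.
    scored = [(r["name"], score_against_job(extract_text(r), job_keywords))
              for r in resumes]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in scored[:top_n]]

resumes = [
    {"name": "Asha", "text": "Python, machine learning, SQL"},
    {"name": "Ravi", "text": "Java, Spring, SQL"},
    {"name": "Meena", "text": "Python, deep learning, statistics"},
]
shortlist = screen_resumes(resumes, ["python", "machine learning", "sql"])
print(shortlist)  # the two highest-scoring candidates
```

The point of the sketch is the fixed control flow: the steps and their order are decided by the designer, not by the system at runtime.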
Custom Chatbot Applications
- API/SDK interface: Applications that connect to LLMs directly via REST APIs, or through an SDK (e.g. Python modules) that exposes those APIs as functions.
- Programmatic Chatbots: Applications developed against LLM APIs or SDKs.
- Retrieval Augmented Generation (RAG) Applications: Applications that combine context-specific knowledge with calls to general-purpose LLMs for language understanding and generation.
- Low-code/No-code Chatbots: Chatbots with context-specific knowledge, built without coding, e.g. using Chatbase or Copilot.
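As a rough illustration of the RAG pattern, the sketch below retrieves context snippets with simple keyword overlap and passes them, together with the question, to a stubbed `call_llm` function. In a real application the retriever would typically use embeddings and the stub would be an API/SDK call to an LLM; the knowledge base here is made up.

```python
# Toy RAG sketch: retrieve context-specific snippets, then build a
# prompt that grounds the LLM's answer in that context.

KNOWLEDGE_BASE = [
    "Refunds are processed within 7 business days.",
    "Support is available Monday to Friday, 9am to 6pm.",
    "Orders can be cancelled within 24 hours of purchase.",
]

def retrieve(question: str, top_k: int = 1) -> list[str]:
    # Rank snippets by word overlap with the question (a stand-in for
    # embedding-based similarity search).
    q_words = set(question.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda s: len(q_words & set(s.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def call_llm(prompt: str) -> str:
    # Stub: a real application would send this prompt to an LLM API here.
    return "Answer based on: " + prompt.split("Context:")[1].strip()

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (f"Use only the context to answer.\n"
              f"Question: {question}\nContext: {context}")
    return call_llm(prompt)

print(answer("How long do refunds take?"))
```

The key design point is that the LLM is asked to answer from the retrieved context rather than from its general training data alone.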
AI Agents
An AI agent is like an autonomous problem-solver that can plan, decide, and adapt steps on the fly to achieve a goal. It doesn’t just follow a fixed script — it figures out what to do next based on the situation.
Examples
Resume Screening AI Agent
- Understands the recruiter’s instructions (“Find best 5 candidates for Data Scientist role under ₹12 LPA”).
- Chooses the right tools (resume parser, job-matching AI).
- Adjusts the selection criteria if too few candidates match.
- Sends results and proactively suggests other candidates in related roles.
Virtual travel assistant on your phone.
- Perceives: Reads your request (“Book me a flight to Delhi tomorrow under ₹5,000”).
- Decides: Searches multiple airlines, checks timings, and filters by price.
- Acts: Books the best matching flight and sends you the ticket.
It’s “smart” because it’s not just following a fixed script: it uses input from the environment (your request, airline data, current availability) to decide what to do next.
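The adaptive behaviour that separates an agent from a fixed workflow can be sketched as a loop that revises its own criteria at runtime. The flight data and the 20% budget relaxation below are invented purely for illustration.

```python
# Toy sketch of agent-style adaptation: if too few flights match the
# user's budget, the "agent" relaxes the criterion and tries again,
# instead of failing like a fixed pipeline would.

FLIGHTS = [
    {"airline": "A1", "price": 5400},
    {"airline": "A2", "price": 6200},
    {"airline": "A3", "price": 4900},
]

def find_flights(budget: int, needed: int = 2, max_rounds: int = 3) -> list[dict]:
    for _ in range(max_rounds):
        matches = [f for f in FLIGHTS if f["price"] <= budget]
        if len(matches) >= needed:
            # Goal met: return the cheapest matching flights.
            return sorted(matches, key=lambda f: f["price"])[:needed]
        # Adapt: too few matches, so relax the budget and retry.
        budget = int(budget * 1.2)
    return matches

print(find_flights(5000))
```

With a budget of 5000 only one flight qualifies, so the loop relaxes the budget once and then succeeds; a fixed workflow would have returned the single match and stopped.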
The table below contrasts a chatbot with an AI agent.
| Feature | Chatbot | AI Agent |
|---|---|---|
| Primary Role | Engage in conversation and provide answers or guidance. | Achieve a goal by perceiving, deciding, and taking actions. |
| Nature | Mostly reactive – responds to user queries. | Proactive – can initiate actions based on triggers or monitoring. |
| Decision-Making | Limited; often follows pre-defined rules or scripts. | Uses reasoning, planning, and sometimes learning to decide next steps. |
| Capabilities | Text or voice conversation within fixed scope. | Can chain multiple actions, use tools, call APIs, search data, and operate systems. |
| Automation Level | Low – mainly guides the user to act. | High – completes tasks end-to-end with minimal user intervention. |
| Adaptability | Rarely adapts unless reprogrammed. | Can adapt or improve from feedback and results. |
| Example | “What’s the weather tomorrow?” → Replies with forecast. | “Plan my weekend trip.” → Finds best travel dates, books tickets, reserves hotel, and sends itinerary. |
The table below contrasts an AI workflow with an AI agent.
| Aspect | AI Workflow | AI Agent |
|---|---|---|
| Nature | Pre-defined sequence of steps. | Dynamic, adaptive decision-making. |
| Flexibility | Low – follows same steps each time. | High – can change approach based on context. |
| Control Flow | Human designs the step-by-step process. | Agent decides the steps needed at runtime. |
| Initiation | Usually triggered by a user or scheduled event. | Can be triggered by environment changes, goals, or proactively. |
| Example | Fixed pipeline for classifying images. | AI assistant that identifies, categorizes, and even requests missing data before classification. |
Agentic AI
Agentic AI refers to multiple AI agents capable of acting autonomously, taking initiative, and achieving goals without constant human supervision.
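As a toy illustration of multiple agents collaborating, the sketch below hands the output of a "researcher" agent to a "writer" agent. The roles, data, and hand-off scheme are assumptions for the example; in a real agentic system each role would typically be backed by an LLM and a set of tools.

```python
# Minimal multi-agent hand-off: one agent gathers facts, another
# turns them into a draft, with no human step in between.

def researcher_agent(topic: str) -> list[str]:
    # Stand-in for an agent that queries tools or data sources.
    notes = {"solar": ["Panels convert sunlight to power.",
                       "Costs have fallen sharply."]}
    return notes.get(topic, [])

def writer_agent(facts: list[str]) -> str:
    # Stand-in for an agent that drafts text from the researcher's output.
    return " ".join(facts)

def run_team(topic: str) -> str:
    # The output of one agent becomes the input of the next.
    return writer_agent(researcher_agent(topic))

print(run_team("solar"))
```

Frameworks for agentic AI generalize this hand-off: agents exchange messages, call tools, and decide among themselves who acts next.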
Products with LLMs embedded
This category covers products that ship with LLMs embedded in them, for example Office applications with Copilot.
Copilot in Word
- Draft content instantly: Say “Write a professional cover letter for a marketing role” or “Create a summary of this document,” and Copilot will generate it for you.
- Rewrite and refine: Highlight any text and ask Copilot to “Make this more concise,” “Add a persuasive tone,” or “Translate to French.”
- Summarize documents: Copilot can scan long text and give you a bullet-point summary or executive overview.
- Generate tables and lists: Ask it to “Create a comparison table of electric cars” or “List key points from this article.”
- Insert images and formatting: You can request visuals or ask Copilot to format your document for clarity and style.
Framework to install LLMs locally
What is Ollama?
Ollama is not a large language model (LLM) itself. Instead, it’s an open-source platform or runtime tool that lets you download, manage, and run various LLMs locally on your own computer—much like Docker does for containers.
It’s built to work on macOS, Windows, and Linux, offering a command-line interface (CLI) (and more recently, a Windows GUI) to easily interact with models like Mistral, Llama, Gemma, and others.
Ollama vs LLM: What’s the Difference?
| Ollama | LLM (e.g. Llama, Mistral) |
|---|---|
| Platform/runtime tool | The actual language model |
| Manages model download & execution | Performs text generation/comprehension tasks |
| Provides CLI (and GUI on Windows) | Models are executed via Ollama |
| Runs quantized model builds for efficiency | Often distributed in quantized formats (e.g. GGUF) |
Key Capabilities of Ollama
- Model Management: You can browse a library of LLMs, then pull and run them, all via simple CLI commands like `ollama pull <model-name>`, `ollama run`, and `ollama list`.
- Local Execution: Everything runs on your machine, with no need for cloud access, which means better privacy, offline functionality, and lower latency.
Ollama command line
```shell
# Run the model, downloading it first if it is not already present.
ollama run tinyllama

# Exit the interactive chat session.
/bye

# List the models available locally.
ollama list

# Show information about a model: architecture, parameters,
# context length, embedding length, quantization.
ollama show <modelname>
```
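Besides the CLI, Ollama also exposes a local REST API (on port 11434 by default). The sketch below builds the JSON payload for its `/api/generate` endpoint and defines a small client; actually calling `generate()` assumes an Ollama server is running locally and the model has already been pulled, so the example only prints the payload.

```python
# Sketch of talking to a locally running Ollama server over its REST API.
import json
import urllib.request

def build_generate_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama to return one complete JSON response
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str,
             host: str = "http://localhost:11434") -> str:
    # POST the payload to /api/generate and return the model's text.
    data = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Only build and show the request; generate() needs a running server.
print(build_generate_payload("tinyllama", "Why is the sky blue?"))
```

With the server running (`ollama run tinyllama` once to pull the model), `generate("tinyllama", "Why is the sky blue?")` returns the model's answer as a string.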