Glossary

Get familiar with essential terminology used in Indigo.

Assistant

A Chatbot or an Assistant is a computer program designed to simulate human conversation. It can interact with users through text or voice interfaces.

In Indigo, an Assistant is everything that is built inside a Workspace.

Assistants can range from simple single agent systems to complex AI-powered assistants with different flows, integrations, and agents capable of understanding context and nuance.

LLM (Large Language Model)

An LLM is a type of artificial intelligence trained on vast amounts of text data. Key characteristics include:

  • Size: These models often have billions of parameters.

  • Capabilities: They can understand and generate human-like text, translate languages, and even perform simple reasoning tasks.

  • Training: LLMs learn patterns from diverse texts, allowing them to adapt to various tasks without specific training for each.

Examples of LLMs include GPT (Generative Pre-trained Transformer) models, BERT, and T5. They form the backbone of many modern AI applications, including advanced chatbots and content generation tools.

CSAT (Customer Satisfaction)

CSAT is a key performance indicator used to measure how satisfied customers are with a product, service, or interaction. Important aspects include:

  • Measurement: Often calculated through surveys asking customers to rate their experience on a scale (e.g., 1-5 or 1-10).

  • Use cases: Can be applied to overall product satisfaction, support interactions, or specific features.

  • Importance: High CSAT scores often correlate with customer loyalty and positive word-of-mouth.

In the context of AI agents, CSAT helps evaluate how well the automated system meets user needs and expectations.
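The survey-based measurement described above is commonly computed as the percentage of respondents who rate their experience at or above a "satisfied" threshold. A minimal sketch in Python (the ratings and the 4-out-of-5 threshold are illustrative):

```python
def csat_score(ratings, satisfied_threshold=4):
    """Percentage of respondents rating at or above the threshold
    on a 1-5 scale (one common way to compute CSAT)."""
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return round(100 * satisfied / len(ratings), 1)

# Example survey responses on a 1-5 scale
ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]
print(csat_score(ratings))  # 7 of 10 ratings are 4 or 5 -> 70.0
```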

AI Agent

An AI Agent is an intelligent software entity capable of perceiving its environment, making decisions, and taking actions to achieve specific goals. In the Indigo.ai platform:

  • Functionality: AI Agents can handle conversations, answer queries, and perform tasks autonomously.

  • Configuration: Users can set up the agent's behavior, knowledge base, and interaction style through various settings.

  • Learning: Advanced AI Agents can improve their performance over time through machine learning techniques.

AI Agents serve as the core component in creating intelligent, responsive automated systems for various applications.

Variables

Variables in the context of AI systems are placeholders that can store different values. They play crucial roles:

  • Personalization: Allow responses to be tailored to individual users (e.g., inserting a user's name).

  • Context retention: Store information throughout a conversation for more natural interactions.

  • Dynamic content: Enable the system to insert up-to-date information (like current date or product prices) into responses.

Proper use of variables enhances the flexibility and relevance of AI-generated content.
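The personalization and dynamic-content roles above amount to substituting stored values into a response template. A minimal sketch, with illustrative variable names (`user_name`, `order_status`):

```python
# Values collected earlier in the conversation or fetched from a backend
variables = {
    "user_name": "Ada",
    "order_status": "shipped",
}

# Template with placeholders that the variables fill in at reply time
template = "Hi {user_name}, your order has been {order_status}."
reply = template.format(**variables)
print(reply)  # Hi Ada, your order has been shipped.
```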

Workflow

A Workflow (also called a Flow) represents the designed path of a conversation or interaction between a user and an AI system. Key aspects include:

  • Structure: Flows often consist of nodes or steps that define possible user inputs and corresponding AI responses.

  • Decision points: Places where the conversation can branch based on user input or other conditions.

  • Actions: Specific operations the AI can perform at each step (e.g., accessing a database, calling an API).

Well-designed flows ensure smooth, logical progressions in conversations and help guide users towards their goals efficiently.

Prompt Engineering

Prompt Engineering is the art and science of crafting effective input text to guide an AI model's output. It involves:

  • Clarity: Writing clear, specific instructions for the AI.

  • Context: Providing necessary background information.

  • Constraints: Setting boundaries for the AI's response (e.g., word limit, tone, format).

Skilled prompt engineering can significantly improve the relevance, accuracy, and usefulness of AI-generated content.
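The three elements above (clarity, context, constraints) can be seen side by side in a single assembled prompt. A hypothetical example; the wording is illustrative, not a prescribed format:

```python
# Background information the assistant needs for this turn
context = "The customer asked about our 30-day return policy."

prompt = (
    "You are a support assistant for an online store.\n"    # clarity: role and task
    f"Context: {context}\n"                                 # context: background info
    "Answer in at most two sentences, in a friendly tone."  # constraints: length, tone
)
print(prompt)
```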

API (Application Programming Interface)

An API is a set of protocols, routines, and tools that specify how software components should interact. In the context of AI platforms:

  • Integration: APIs allow the AI system to connect with other software, databases, or services.

  • Functionality: They can be used to send/receive data, trigger actions, or extend the AI's capabilities.

  • Standardization: APIs provide a consistent way for developers to interact with the AI system, regardless of the underlying implementation.

Well-documented APIs are crucial for building flexible, extensible AI solutions that can integrate seamlessly with existing systems.

RAG (Retrieval-Augmented Generation)

RAG is a technique that enhances AI responses by combining the power of large language models with specific, retrieved information. Key features:

  • Knowledge base: RAG systems maintain a separate database of information.

  • Retrieval: When given a query, the system first retrieves relevant information from its knowledge base.

  • Generation: The retrieved information is then used to augment the AI's response, improving accuracy and relevance.

RAG helps overcome limitations of pure language models by grounding responses in specific, up-to-date information.
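The retrieve-then-generate steps above can be sketched with a toy retriever. Real RAG systems rank passages with vector embeddings; the word-overlap scoring and the sample knowledge base here are simplified stand-ins:

```python
knowledge_base = [
    "Orders can be returned within 30 days of delivery.",
    "Standard shipping takes 3-5 business days.",
    "Support is available Monday to Friday, 9am-6pm.",
]

def retrieve(query, docs):
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

query = "How long does shipping take?"
passage = retrieve(query, knowledge_base)

# The retrieved passage augments the prompt sent to the language model,
# grounding the generated answer in specific information.
augmented_prompt = f"Answer using this information: {passage}\nQuestion: {query}"
print(passage)
```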

Token (in LLMs)

A Token in the context of Large Language Models is a unit of text that the model processes. Understanding tokens is crucial for effective use of LLMs:

  • Definition: Tokens can be words, parts of words, or even punctuation marks.

  • Processing: LLMs break down input text into tokens for analysis and generation.

  • Limits: Many LLMs have maximum token limits for input and output, affecting the length of text they can handle.

Awareness of tokenization helps in optimizing prompts and managing the capacity of LLM-based systems.
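A rough feel for tokenization can be had with a simple word-and-punctuation splitter. This is illustrative only: real LLM tokenizers use subword schemes such as BPE, so actual token counts will differ:

```python
import re

def rough_tokens(text):
    """Naive split into word and punctuation pieces (not real BPE)."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = rough_tokens("Tokenization isn't exact!")
print(tokens)       # ['Tokenization', 'isn', "'", 't', 'exact', '!']
print(len(tokens))  # 6 -- even a short sentence spans several tokens
```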

Knowledge Base

A Knowledge Base is a structured collection of information used by AI systems to provide accurate responses. It typically includes:

  • Content: Frequently asked questions, product details, policies, or any relevant information.

  • Structure: Organized in a way that facilitates quick retrieval and relevance matching.

  • Maintenance: Regularly updated to ensure the AI has access to current, accurate information.

A well-maintained knowledge base is crucial for AI systems to provide consistent, reliable information across various interactions.
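The content, structure, and maintenance aspects above can be pictured as fields on a single entry. A hypothetical shape; the field names are illustrative, not a platform schema:

```python
# One structured knowledge-base entry
entry = {
    "question": "What is your return policy?",              # content
    "answer": "Items can be returned within 30 days.",      # content
    "tags": ["returns", "policy"],                          # structure: aids retrieval
    "last_updated": "2024-06-01",                           # maintenance: keeps answers current
}

# Tags support quick relevance matching at retrieval time
print("returns" in entry["tags"])  # True
```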

Trigger

A Trigger in AI systems is an event or condition that initiates a specific action or response. Triggers can be based on:

  • User input: Specific words, phrases, or intents expressed by the user.

  • Context: The current state of the conversation or user's history.

  • External events: Time-based conditions or updates from connected systems.

Properly configured triggers ensure that AI agents respond appropriately to different situations, enhancing the overall user experience.
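The user-input kind of trigger can be sketched as keyword matching that maps a message to an action. Real platforms typically match intents rather than raw keywords; the trigger phrases and action names here are illustrative:

```python
# Keyword groups mapped to the action they initiate
triggers = {
    ("refund", "money back"): "start_refund_flow",
    ("human", "agent", "operator"): "handover_to_human",
}

def match_trigger(message):
    text = message.lower()
    for keywords, action in triggers.items():
        if any(keyword in text for keyword in keywords):
            return action
    return "default_reply"  # no trigger fired

print(match_trigger("I want my money back"))  # start_refund_flow
```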

Handover

Handover refers to the process of transferring a conversation from an AI agent to a human agent, or vice versa. Key aspects include:

  • Triggers: Conditions that initiate a handover (e.g., complex queries, user frustration).

  • Smooth transition: Ensuring the human agent has context from the AI conversation.

  • User experience: Maintaining a seamless interaction despite the change in responder.

Effective handover processes combine the efficiency of AI with the nuanced understanding of human agents for optimal customer service.

Token (in web chat installation)

In the context of installing a web chat, a Token is a unique identifier used for authentication and configuration. It typically:

  • Authenticates: Verifies that the chat widget is authorized to connect to the AI system.

  • Configures: May contain encoded information about chat settings or behavior.

  • Secures: Helps prevent unauthorized access or misuse of the AI system.

Proper use and protection of these tokens are crucial for maintaining the security and integrity of the AI chat system.

Prompt Chaining

Prompt Chaining is an advanced technique in AI systems where multiple prompts are linked together in a sequence to achieve more complex or nuanced outcomes. Key aspects include:

  • Sequential processing: Output from one prompt becomes input for the next, creating a chain of operations.

  • Task breakdown: Complex tasks are divided into smaller, manageable steps.

  • Specialization: Each prompt in the chain can be optimized for a specific subtask.

  • Enhanced capabilities: Enables AI systems to handle more intricate requests or generate more sophisticated outputs.

Examples of prompt chaining applications:

  1. Data analysis: One prompt extracts relevant data, the next analyzes it, and a final prompt generates a human-readable report.

  2. Content creation: A chain might involve generating an outline, expanding each section, and then editing for tone and style.

  3. Problem-solving: Breaking down a complex problem into steps, solving each step, and then combining the results.

Prompt chaining allows for more controlled and precise AI interactions, often resulting in higher quality outputs for complex tasks. However, it requires careful design to ensure smooth transitions between prompts and maintain context throughout the chain.
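The content-creation example above can be sketched as a three-step chain where each step's output feeds the next. The functions here are plain stand-ins for separate LLM calls:

```python
# Step 1: generate an outline for the topic
def outline(topic):
    return f"Outline for '{topic}': intro, body, conclusion"

# Step 2: expand the outline into a draft
def expand(outline_text):
    return f"Draft based on [{outline_text}]"

# Step 3: edit the draft for tone and style
def edit(draft):
    return draft.replace("Draft", "Polished draft")

# Chaining: each output becomes the next input
result = edit(expand(outline("chatbots")))
print(result)
```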
