Artificial Intelligence – Keywords

Artificial Intelligence (AI)

Machine Learning (ML)

Deep Learning (DL)

Natural Language Processing (NLP)

Computer Vision (CV)

Artificial Neural Network (ANN)

AI Ethics

Explainable AI (XAI)

Artificial General Intelligence (AGI)

Vibe Coding

Model Context Protocol (MCP)

Supervised Learning: Training a model on labeled data.
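
A minimal sketch of supervised learning, assuming scikit-learn as the library: a classifier is fit to feature vectors with known labels, then predicts the label of a new example.

```python
# Supervised learning sketch using scikit-learn (library choice is an
# illustrative assumption): the model learns from feature/label pairs.
from sklearn.linear_model import LogisticRegression

# Toy labeled data: each row of X is a feature vector, y holds the known labels.
X = [[0.1, 1.2], [0.4, 0.9], [2.1, 0.2], [2.5, 0.1]]
y = [0, 0, 1, 1]  # labels supplied by a human or an existing process

model = LogisticRegression()
model.fit(X, y)                      # training on labeled data
print(model.predict([[2.0, 0.3]]))   # predict the label for a new example
```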

Unsupervised Learning: Training on data without labels to find patterns.

Reinforcement Learning (RL): Training agents to take actions in an environment to maximize cumulative reward.
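
A toy Q-learning sketch (one common RL algorithm) on a hypothetical five-state corridor: the agent earns a reward only at the final state, and its table of action values is nudged toward that reward over many episodes.

```python
# Tabular Q-learning sketch on a made-up 1-D corridor: the agent moves
# left/right and receives reward 1 only when it reaches the final state.
import numpy as np

n_states, n_actions = 5, 2          # states 0..4, actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        a = np.random.randint(n_actions) if np.random.rand() < epsilon else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q[s, a] toward reward + discounted future value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)  # learned action values; "right" comes to dominate in every state
```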

Classification: Predicting a discrete label (e.g., spam or not spam).

Regression: Predicting a continuous value (e.g., housing price).

Clustering: Grouping similar items together (e.g., customer segmentation).
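
A clustering sketch, assuming scikit-learn's KMeans and made-up customer features (spend, visits). No labels are supplied, which also illustrates unsupervised learning.

```python
# Unsupervised clustering sketch with scikit-learn's KMeans: group customers
# by similarity of their (spend, visits) features, without any labels.
from sklearn.cluster import KMeans

customers = [[5, 1], [6, 2], [50, 20], [55, 18], [120, 3], [130, 2]]
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # cluster index assigned to each customer
```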

Dimensionality Reduction: Reducing the number of features while retaining essential information (e.g., PCA).
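
A dimensionality-reduction sketch using PCA from scikit-learn on random 4-feature data, projecting it down to 2 components.

```python
# PCA sketch: project 4-D points onto the 2 directions of greatest variance,
# keeping most of the information in fewer features.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 4)          # 100 samples, 4 features
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)    # now 100 samples, 2 features
print(X_reduced.shape, pca.explained_variance_ratio_)
```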

GPT (Generative Pre-trained Transformer): A transformer-based LLM that can generate human-like text.

BERT (Bidirectional Encoder Representations from Transformers): An NLP model designed to understand context in both directions.

CNN (Convolutional Neural Network): A neural network architecture used primarily for image recognition.
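
A minimal CNN sketch, assuming PyTorch as the framework; the layer sizes are illustrative and chosen for 28x28 grayscale images.

```python
# Minimal convolutional network in PyTorch for 28x28 grayscale images,
# e.g. digit recognition with 10 output classes.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn 16 local image filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample to 14x14
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),                 # map features to 10 classes
)

logits = cnn(torch.randn(1, 1, 28, 28))          # one fake image
print(logits.shape)                              # torch.Size([1, 10])
```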

RNN (Recurrent Neural Network): A neural network architecture for sequence data such as time series or text.

TensorFlow / PyTorch: Popular deep learning frameworks.

OpenAI / Google DeepMind / Anthropic: Leading AI research labs.

Hugging Face: Platform and community for sharing AI models, especially transformers.
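
A sketch of the Hugging Face `transformers` pipeline API, which pulls a pretrained model from the Hub and runs it in a couple of lines (the default sentiment model is downloaded on first use).

```python
# Hugging Face `transformers` pipeline sketch: download a pretrained model
# from the Hub and run sentiment analysis (requires internet access).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("I love how easy it is to share models."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```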

Generative AI: A branch of AI that creates new content (text, images, music, code) rather than just analyzing existing data.

Large Language Model (LLM): An AI model trained on massive text datasets to understand and generate human-like language (e.g., GPT-4).

Transformer: A neural network architecture used in most modern NLP models and LLMs; it lets models process language efficiently and with context.

Prompt: The input or instruction given to a generative model to elicit a desired response.

Prompt Engineering: Crafting effective prompts to guide a generative model toward the desired output.
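
A small sketch contrasting a vague prompt with an engineered one (explicit role, format constraint, and an example); the ticket text and wording are made up.

```python
# Prompt-engineering sketch: the same request, phrased loosely vs. with an
# explicit role, output format, and an example (the model call is omitted).
vague_prompt = "Summarize this ticket."

engineered_prompt = (
    "You are a support engineer. Summarize the customer ticket below in "
    "exactly two bullet points: the problem and the requested action.\n\n"
    "Example:\n- Problem: login page times out\n- Action: investigate auth service\n\n"
    "Ticket:\n{ticket_text}"
)
print(engineered_prompt.format(ticket_text="App crashes when exporting PDFs..."))
```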

RAG (Retrieval-Augmented Generation): Combines LLM generation with retrieval of external documents (e.g., from company databases) to produce accurate, grounded responses.
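
A simplified RAG sketch: retrieval here is a toy word-overlap score rather than a real vector index, and `call_llm` is a hypothetical stand-in for whatever LLM API is used.

```python
# Retrieval-Augmented Generation sketch: retrieve the most relevant document,
# then ground the LLM prompt in it. `call_llm` is a hypothetical placeholder.
docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday, 9am-5pm.",
]

def retrieve(query: str) -> str:
    # Toy retrieval: pick the document sharing the most words with the query.
    # Real systems use vector embeddings and a similarity search index.
    score = lambda d: len(set(query.lower().split()) & set(d.lower().split()))
    return max(docs, key=score)

def answer(query: str) -> str:
    context = retrieve(query)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)  # hypothetical LLM call

# answer("How long do refunds take?")
```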

AI Ethics: Concerns around fairness, accountability, and transparency in AI systems.

Hallucination: When an AI generates information that sounds plausible but is incorrect or made up.

Explainable AI (XAI): Making AI decisions understandable by humans.

Token: A piece of text (a word, subword, or character) that LLMs use to process and generate responses.
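
A tokenization sketch using a Hugging Face tokenizer (the BERT tokenizer is just an illustrative choice); the exact subword pieces vary by model.

```python
# Tokenization sketch: text is split into subword tokens before an LLM
# ever sees it (Hugging Face tokenizer, model choice is an assumption).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("Tokenization splits uncommon words into pieces."))
# e.g. ['token', '##ization', 'splits', ...] -- '##' marks subword pieces
```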

Fine-tuning: Customizing a pre-trained model with additional training on domain-specific data.
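
A fine-tuning sketch, assuming the Hugging Face `transformers` and PyTorch libraries: a pretrained BERT classifier is updated for a few steps on a tiny made-up domain dataset.

```python
# Fine-tuning sketch: start from pretrained BERT weights and keep training
# on a small domain-specific dataset (texts and labels here are made up).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

texts = ["great product", "terrible service"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                      # a few passes over the tiny dataset
    out = model(**batch, labels=labels)
    out.loss.backward()                 # gradients w.r.t. the pretrained weights
    optimizer.step()
    optimizer.zero_grad()
```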

Zero-shot / Few-shot Learning: The ability of a model to perform tasks with little (few-shot) or no (zero-shot) task-specific training.
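
A few lines showing zero-shot behavior via the Hugging Face zero-shot-classification pipeline: candidate labels are supplied at inference time, with no task-specific training.

```python
# Zero-shot sketch: a pretrained model classifies text into labels it was
# never explicitly trained on (default model downloaded on first use).
from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier(
    "The invoice total does not match the purchase order.",
    candidate_labels=["billing", "shipping", "technical support"],
)
print(result["labels"][0])  # most likely label, e.g. "billing"
```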

Diffusion Models: A type of generative model (used in tools like DALL·E and Stable Diffusion) that creates realistic images by progressively removing noise.

Multimodal AI: AI that processes and generates multiple data types (text, image, audio, video) simultaneously.

Chatbot: An AI system that interacts with users via natural language (often powered by LLMs).

API (Application Programming Interface): A way to connect to generative AI models, such as OpenAI's GPT, and use them in applications.
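
An API sketch using OpenAI's official Python client; the model name is an assumption, and an OPENAI_API_KEY environment variable is required.

```python
# API sketch: call a hosted generative model from application code using
# OpenAI's Python client (model name is an illustrative assumption).
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about APIs."}],
)
print(response.choices[0].message.content)
```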

Guardrails: Techniques and tools that ensure AI systems behave safely, ethically, and reliably.