Your Ultimate AI Glossary. Master AI lingo with our complete glossary – these are 500 terms you should know.
Abductive Logic Programming (ALP) is a form of logic programming that incorporates abductive reasoning to infer plausible explanations for observed phenomena.
Abductive reasoning is a form of logical inference that involves creating the best explanation for a set of observations or evidence.
An abstract data type is a conceptual model for data that defines its behavior and operations, but does not specify its implementation.
Abstraction is the process of removing or hiding details and complexities to focus on essential features.
Accelerating change refers to the concept of a continuous increase in the rate of technological, societal, and economic advancement, leading to a compounding effect over time.
Accuracy refers to the closeness of a measured value to a standard or known value. In the context of AI and machine learning, accuracy refers to the correctness of a model's predictions compared to the actual outcomes.
Actionable Intelligence refers to information that can be acted upon or used to make informed decisions.
Action language: An action language is a formal language used to specify and describe actions and their effects within an artificial intelligence system.
Action model learning is the process of developing and refining a model that predicts the outcomes of potential actions in a given environment.
Action selection is the process of choosing which action to take from a set of available options in a given situation.
An activation function is a mathematical function applied to the output of each node in a neural network, typically to the node's weighted sum of inputs. It determines if and to what extent the signal should be passed on to the next layer of the network.
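A minimal Python sketch of the idea using two common activation functions; the inputs, weights, and bias below are made-up values for illustration:

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # Passes positive values through unchanged, zeroes out negatives.
    return max(0.0, z)

# Hypothetical node: weighted sum of its inputs plus a bias term.
inputs, weights, bias = [0.5, -1.2, 3.0], [0.4, 0.7, -0.2], 0.1
z = sum(x * w for x, w in zip(inputs, weights)) + bias

print(sigmoid(z), relu(z))  # the activation decides what gets passed on
```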
Adaptive algorithm: A type of algorithm that is capable of adjusting its behavior or parameters based on changing situations or input data.
ANFIS is a hybrid intelligent system that combines the learning capabilities of neural networks and the human-like reasoning of fuzzy logic to create a flexible and adaptive method for inference and modeling.
Admissible heuristic: A heuristic is admissible if it never overestimates the cost to reach the goal from any given state; its estimate is always a lower bound on the actual cost.
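For instance, on a grid where each move shifts one cell horizontally or vertically at cost 1, Manhattan distance can never overestimate the remaining cost. A minimal sketch (the coordinates are hypothetical):

```python
def manhattan(cell, goal):
    # Each move covers one unit horizontally or vertically, so the
    # Manhattan distance is a lower bound on the true path cost:
    # the heuristic is admissible.
    (x1, y1), (x2, y2) = cell, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))  # 7 <= cost of any real path
```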
Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human emotions.
Agent architecture refers to the underlying model or framework that organizes an AI agent's decision-making processes and behavior. It encompasses the design of the agent's structure, its interaction with the environment, and the mechanisms for processing information and making decisions.
AI accelerator: A specialized hardware designed to efficiently process the computations required for artificial intelligence tasks, such as machine learning and neural network inference.
AI-complete is a term used to describe problems or tasks that require a level of artificial intelligence that is at least as advanced as human intelligence.
An algorithm is a set of rules or instructions designed to solve a specific problem or perform a specific task, often used in computer science and mathematics.
Algorithmic efficiency refers to the measure of how well an algorithm performs in terms of time and space. It assesses the ability of an algorithm to use minimal resources in solving a problem.
Algorithmic probability refers to the measure of the likelihood of a particular string being produced by a universal Turing machine.
AlphaGo is a computer program developed by DeepMind that combines deep neural networks with tree search to play the board game Go; it was the first program to defeat a professional human Go player on a full-sized board without handicap.
Ambient intelligence (AmI) refers to a network of intelligent and interconnected devices and environments that can provide personalized and seamless assistance to individuals in their daily activities.
Analysis of algorithms is the process of evaluating the efficiency and performance of algorithms. It involves studying the resource usage, such as time and space, required by an algorithm to solve a problem.
Analytics refers to the process of collecting, analyzing, and interpreting data to identify meaningful patterns and insights.
Anaphora: In linguistics, anaphora refers to the use of a word or phrase to refer back to a previously mentioned word or phrase.
Annotation: A note, comment, or explanation added to a text or data to provide additional information or interpretation.
Answer Set Programming (ASP) is a form of declarative programming that allows for solving complex problems by representing knowledge and reasoning about it using logical rules.
Anytime algorithm: An anytime algorithm is a type of algorithm that can provide a usable solution within any time constraint, allowing for intermediate solutions to be generated and improved upon as more time becomes available.
An application programming interface (API) is a set of rules and protocols that allows different software applications to communicate with each other. It defines the methods and data formats that the applications can use to request and exchange information.
Approximate string matching is the process of finding similar or identical strings within a set of strings, allowing for minor differences such as spelling mistakes or character substitutions.
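Edit (Levenshtein) distance is a standard way to quantify such differences; strings within a small distance of each other count as approximate matches. A minimal sketch:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance: the minimum number of
    # insertions, deletions, and substitutions turning a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("recieve", "receive"))  # 2 (a transposition costs 2 edits here)
```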
Approximation error is the difference between the actual value and the estimated value in a calculation or measurement.
An argumentation framework is a formal representation of arguments and their relationships, used to analyze and evaluate reasoning and decision-making processes.
Artificial General Intelligence (AGI) refers to AI systems that possess general problem-solving capabilities and the ability to understand and complete a wide range of tasks, similar to human cognitive abilities.
Artificial Immune System (AIS): A computational model inspired by the human immune system that is used to solve optimization, detection, and classification problems.
Artificial intelligence (AI) refers to the simulation of human intelligence in machines: the development of computer systems programmed to perform tasks that typically require human intelligence, such as visual perception, speech recognition, learning, reasoning, problem-solving, decision-making, and language translation.
Artificial Intelligence Markup Language (AIML) is an XML-based markup language used to create natural language processing patterns for chatbots and virtual assistants.
Artificial Neural Network (ANN): A computational model inspired by the structure and function of the human brain, consisting of interconnected nodes (neurons) that process and transmit information.
Association for the Advancement of Artificial Intelligence (AAAI): A nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.
Asymptotic computational complexity is a measure of the efficiency of an algorithm, describing how its running time or space requirements grow as the size of the input grows.
Attention Mechanism: A component of artificial neural networks that allows the model to focus on specific parts of the input data, emphasizing important information while disregarding irrelevant details.
Attributional calculus is a logic and knowledge representation system, introduced by Ryszard Michalski, that combines elements of propositional, predicate, and multi-valued logic to support rule learning and natural induction in machine learning.
Augmented reality (AR) is a technology that superimposes digital information, such as computer-generated images or data, onto a user's view of the real world, usually through the use of a device such as a smartphone or a headset.
Auto-classification is the process of automatically categorizing or tagging data based on predefined rules or machine learning algorithms.
Auto-complete: A feature that predicts and suggests words or phrases as a user types, based on context and previous input, aiming to speed up the typing process and enhance accuracy.
Automata theory is a branch of computer science and mathematics that studies abstract machines and their computational abilities.
Automated machine learning (AutoML) is the process of automating the selection and tuning of machine learning models, as well as feature engineering, to facilitate the creation of high-performing predictive models with minimal manual effort.
Automated planning and scheduling is the process of using algorithms and software to create efficient sequences of actions to achieve desired goals within specified constraints.
Automated reasoning is the use of computer programs to derive conclusions from given premises using logical rules.
Autonomic computing (AC) is a self-managing computing model aimed at reducing the complexity of IT systems. It enables systems to self-configure, self-optimize, self-heal, and self-protect.
Autonomous: having the ability to act independently or self-govern without external control.
An autonomous car is a vehicle capable of navigating and operating without direct human input.
Autonomous robot: A machine that is capable of performing tasks and making decisions without human intervention.
Backpropagation is a method used to calculate and update the gradients of neural network parameters during training, by propagating errors backwards through the network.
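A minimal illustration on a single sigmoid neuron, with made-up numbers: the "backward" step is just the chain rule, and the resulting gradients drive the parameter updates:

```python
import math

x, target = 1.5, 0.0      # hypothetical input and desired output
w, b, lr = 0.8, 0.2, 0.5  # initial weight, bias, learning rate

for step in range(20):
    z = w * x + b                  # forward pass: weighted input
    y = 1 / (1 + math.exp(-z))     # sigmoid activation
    loss = (y - target) ** 2       # squared error

    # Backward pass: propagate the error through the chain rule.
    dloss_dy = 2 * (y - target)
    dy_dz = y * (1 - y)            # derivative of the sigmoid
    w -= lr * dloss_dy * dy_dz * x
    b -= lr * dloss_dy * dy_dz

print(round(loss, 4))  # loss shrinks as the gradients update w and b
```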
Backpropagation through time (BPTT) is a method used to train recurrent neural networks (RNNs) by unrolling the network across time steps and propagating errors backwards through the unrolled sequence to update the weights.
Backward chaining is a reasoning method that starts with the goal and works backward to determine the sequence of steps needed to achieve the goal.
A bag-of-words model represents text as an unordered collection of its words, typically as word counts, disregarding grammar and word order.
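A minimal sketch using Python's standard library:

```python
from collections import Counter

def bag_of_words(text):
    # Lowercase, split on whitespace, and count occurrences; grammar
    # and word order are deliberately discarded.
    return Counter(text.lower().split())

print(bag_of_words("the cat sat on the mat"))
# Counter({'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1})
```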
Bag-of-words model in computer vision: A simple technique for representing an image by counting the occurrences of visual words in the image without considering the spatial arrangement of these words.
Batch normalization is a technique used to improve the training of artificial neural networks by normalizing each layer's inputs over each mini-batch during training. This helps to reduce internal covariate shift and improves the stability and speed of training.
Bayesian programming is a method that combines Bayesian statistics with computer programming to model and analyze uncertain outcomes and make predictions.
The bees algorithm is a nature-inspired optimization algorithm that mimics the foraging behavior of honey bees to solve complex computational problems.
Behavior informatics (BI) is a multidisciplinary field that studies human and animal behavior using computational and analytical techniques.
Behavior tree (BT): A behavior tree is a hierarchical way of structuring the decision-making process for an agent in AI, particularly in video games and robotics. It organizes the actions a system can take based on conditions and priorities.
Belief-Desire-Intention (BDI) software model is a theoretical framework that simulates human reasoning and decision-making by representing agents' beliefs about the world, their desires or goals, and their intentions to act based on these beliefs and desires.
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language representation model developed by Google, designed to understand the context of a word in a sentence by considering both its left and right context at the same time.
Bias refers to the systematic deviation of results or conclusions from the true state of affairs, often due to predetermined beliefs or prejudices.
The bias-variance tradeoff refers to the balance between the bias and variance of a machine learning model. It involves managing the tradeoff between the model's ability to capture the true relationship in the data (bias) and its sensitivity to variations in the training data (variance). Achieving low bias and low variance is the goal for optimal model performance.
Big data refers to large and complex data sets that are difficult to manage and process using traditional data processing applications.
Big O notation is used in computer science to describe the upper bound of an algorithm's time or space complexity in relation to its input size.
A binary tree is a data structure consisting of nodes where each node has at most two children, referred to as the left child and the right child.
A blackboard system is a problem-solving concept where multiple specialized knowledge sources work together to solve a complex problem. These knowledge sources communicate and contribute to a shared "blackboard" where they can asynchronously access and update information.
Boltzmann machine: A type of stochastic recurrent neural network that is trained to find patterns in data.
The Boolean satisfiability problem, also known as SAT, is the problem of determining if there exists an assignment of truth values to variables in a given Boolean formula that makes the formula evaluate to true.
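A brute-force sketch that checks every assignment; the formula below is a made-up example, and real SAT solvers are far more sophisticated, since the number of assignments grows exponentially with the number of variables:

```python
from itertools import product

def brute_force_sat(formula, variables):
    # Try every True/False assignment; SAT asks whether at least one
    # of them makes the formula evaluate to true.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# (a OR b) AND (NOT a OR c), written as a Python predicate.
f = lambda v: (v["a"] or v["b"]) and (not v["a"] or v["c"])
print(brute_force_sat(f, ["a", "b", "c"]))
# {'a': False, 'b': True, 'c': False}
```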
Bounding box: A rectangular frame that is used to define the boundaries of an object or region within an image. It is commonly used in object detection and image annotation tasks in computer vision.
Brain technology refers to the various tools, devices, and techniques used to study and enhance the functionality of the brain. This can include imaging technologies, brain-computer interfaces, and neurostimulation methods.
Branching factor: The number of possible actions or choices available to an AI system at each decision point.
Brute-force search is a method of problem-solving that involves systematically trying all possible solutions until the correct one is found.
A capsule neural network (CapsNet) is a type of neural network architecture designed to better represent hierarchical relationships in the data by using groups of neurons called "capsules" to encode different properties of the input.
Case-based reasoning (CBR) is a problem-solving methodology that relies on past cases to address new problems. It involves retrieving, reusing, and adapting solutions from similar past cases to find a solution for the current problem.
Cataphora is a linguistic phenomenon where a pronoun or other grammatical element refers to a later word or phrase in a sentence.
Categorization is the process of classifying items into distinct groups based on their characteristics or attributes.
Category: A group of items or concepts that share common characteristics or properties and are grouped together for organizational or analytical purposes.
Category Trees: A hierarchical structure that organizes data or information into categories and subcategories, typically used to represent different levels of classification within a system.
Chatbot: A chatbot is an AI-powered computer program designed to conduct a conversation with users, typically over the internet.
Classification is the process of identifying and categorizing data into predefined classes or categories. It is a type of supervised machine learning algorithm used for organizing and labeling data based on specific features or characteristics.
Cloud robotics refers to a concept where robots are connected to a cloud infrastructure, allowing them to access and benefit from cloud services such as data storage, processing power, and machine learning algorithms.
Cluster analysis is the process of grouping data points or objects into clusters based on their similarities.
Cobweb is an incremental conceptual clustering algorithm used in machine learning that organizes observations into a hierarchical classification tree. It suits tasks where data arrives sequentially and must be learned from in a timely manner.
Cognitive architecture refers to the underlying framework or structure that supports various cognitive processes such as perception, learning, memory, and decision-making in a human or artificial intelligence system.
Cognitive computing refers to the use of computerized models of the human brain to simulate human thought processes, such as learning and problem-solving.
Cognitive Map: A mental representation of one's physical environment, including spatial relationships between objects or locations.
Cognitive science is the interdisciplinary study of how the mind processes and understands information.
Combinatorial optimization refers to the process of finding the best solution from a finite set of possible solutions for a problem involving a large number of variables or decisions.
A committee machine is a type of artificial neural network that combines the outputs of multiple individual networks to make a decision or prediction.
Commonsense knowledge refers to the basic knowledge and understanding of the world that is widely shared among people and used in everyday reasoning and decision-making.
Commonsense reasoning is the ability of an AI system to make inferences and conclusions based on everyday knowledge and understanding of the world.
Completions: The suggested text or content generated by an AI model to follow or complete a given input.
Composite AI: An approach to AI that combines various AI technologies and techniques to create more integrated and advanced solutions.
Computational chemistry refers to the use of computer models and algorithms to understand and predict the behavior of chemical systems.
Computational complexity theory is the study of the resources required to solve computational problems, such as time and space.
Computational creativity refers to the use of computer algorithms and systems to generate original and valuable ideas, designs, or outputs.
Computational cybernetics is the interdisciplinary study of the structure, behavior, and control of complex systems using computational models and techniques.
Computational humor refers to the use of algorithms and computer programs to generate, understand, or analyze humor. This field combines aspects of artificial intelligence, linguistics, and cognitive science to create or comprehend jokes, puns, and other forms of humor.
Computational intelligence (CI) refers to the study and development of algorithms and techniques inspired by nature or human intelligence, used to solve complex real-world problems.
Computational learning theory is a branch of computer science and mathematics that focuses on understanding the computational complexity of machine learning algorithms and their capabilities.
Computational linguistics is a field that focuses on the use of computers to process and understand natural language.
Computational Linguistics (Text Analytics, Text Mining): The interdisciplinary field that involves the use of computer algorithms to analyze and understand human language.
Computational mathematics is a field of study that focuses on using computers to solve mathematical problems and simulate mathematical processes.
Computational neuroscience is the study of how neural systems process information and is focused on developing mathematical models and computational techniques to understand the functions of the brain.
Computational number theory is the study of algorithms and computational methods for solving problems related to integers and other discrete mathematical structures.
A computational problem is a task or question that can be solved using a computer and algorithms.
Computational Semantics: Computational semantics refers to the use of mathematical and computational techniques to understand meaning in natural language. It involves developing algorithms and models to analyze and represent the meaning of words, phrases, and sentences, and to enable computers to understand and process human language.
Computational statistics is the field of study that focuses on the development and application of statistical methods using algorithms and computer programs.
Computer audition (CA) refers to the use of computers to analyze and interpret audio signals, such as speech or music. This field combines techniques from signal processing, machine learning, and audio engineering to enable computers to understand and respond to sound.
Computer-automated design (CAutoD) is the use of computer software to assist in the creation, modification, analysis, or optimization of a design.
Computer science is the study of computers and computational systems.
Computer vision is the field of study that focuses on enabling computers to interpret and understand visual information from the real world.
Concept drift refers to the phenomenon where the statistical properties of the target variable in a machine learning problem change over time, leading to a degradation in model performance.
Connectionism is a theoretical approach in cognitive science that models mental processes as interconnected networks of simple processing units.
Consistent heuristic: In artificial intelligence, a consistent heuristic is a heuristic function whose estimate for any node is no greater than the cost of moving to a neighboring node plus that neighbor's estimate (a triangle inequality). Every consistent heuristic is also admissible.
Constrained Conditional Model (CCM): A type of machine learning model that incorporates certain constraints to guide the output generation process based on specific conditions.
Constraint Logic Programming: A programming paradigm that combines logic programming with constraints to solve complex problems by expressing relationships between variables as logical constraints.
Constraint programming is a problem-solving technique that involves defining a set of constraints and finding a solution that satisfies all constraints.
Constructed language: A constructed language is a language created intentionally rather than having evolved naturally. These languages are often developed for specific purposes, such as in literature, film, or as a means of international communication.
Content refers to the information, media, or material that is presented or distributed, such as text, images, videos, or audio.
Content Enrichment, or Enrichment, refers to the process of enhancing the quality, value, and relevance of digital content through additions, updates, or improvements.
A controlled vocabulary is a list of standardized terms used to index and retrieve information in a consistent manner across a specific domain or subject area.
Control theory is a branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs, and how to design control systems that can influence the behavior of those systems.
Conversational AI refers to artificial intelligence technology that enables machines to understand and respond to human language in a natural, conversational way.
Convolutional Neural Network (CNN): A type of deep neural network designed for data with a grid-like topology, such as 2D images. It uses specialized convolutional layers to automatically learn features and patterns from the input, and is commonly used for processing visual data.
Co-occurrence: The simultaneous presence or occurrence of two or more items or events. In the context of natural language processing, co-occurrence refers to the frequency with which words appear together within a certain context.
Corpus: A corpus refers to a large and structured collection of texts used for research, analysis, and training natural language processing models.
Crossover: In genetic algorithms, crossover is a genetic operator that combines genetic material from two parent solutions to produce new offspring solutions.
Custom/Domain Language Model: A language model trained specifically on a particular domain or for a specific application, tailored to understand and generate text relevant to that domain or application.
Darkforest: A computer Go program developed by Facebook AI Research that combines deep convolutional neural networks with tree search to evaluate moves.
The Dartmouth workshop was a 1956 conference where the term "artificial intelligence" was coined and the field of AI was first established.
Data augmentation refers to the technique of artificially increasing the size of a training dataset by applying modifications and transformations to the existing data samples.
Data discovery is the process of locating and identifying data within an organization's various sources and systems.
Data Drift: Data drift refers to the change in the statistical properties of data over time, which can impact the performance of machine learning models.
Data extraction is the process of retrieving specific data from various sources for further analysis and processing.
Data fusion is the process of integrating and combining multiple sources of data to achieve a more complete and accurate understanding or representation of a situation or phenomenon.
Data Ingestion is the process of collecting and importing data from various sources into a storage or processing system for analysis and use.
Data integration refers to the process of combining data from different sources into a single, unified view.
Data labelling is the process of tagging or annotating data to provide context and allow it to be used as training data for machine learning algorithms.
Datalog is a declarative logic programming language designed for querying and reasoning about databases.
Data mining is the process of discovering patterns and extracting useful information from a large amount of data.
Data scarcity refers to a situation in which there is a limited amount of available data for analysis or processing.
Data science is a multidisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data.
Data set (dataset): A collection of data, typically organized in a structured format and stored for easy access, used for analysis, model training, research, or other purposes.
A data warehouse (DW or DWH) is a centralized repository for storing and integrating structured and/or unstructured data from various sources. It is designed to support data analysis and reporting.
The decision boundary is a line or surface that separates different classes or categories in a machine learning model. It helps the model determine which category a new input data point belongs to.
A decision support system (DSS) is an information system that helps organizations or individuals make decisions by providing relevant data, analysis, and modeling tools.
Decision theory is the study of rational and logical decision-making processes.
Decision tree learning is a machine learning method used for classification and regression tasks, where the data is split into nodes based on different decisions, ultimately leading to the prediction of an outcome.
Declarative programming is a style of computer programming where the programmer expresses the desired outcome without specifying the exact steps to achieve it. Instead, the program describes the problem to be solved and the system figures out how to solve it.
Deductive classifier: An AI system that uses logical rules and reasoning to classify or categorize data.
Deep Blue: A chess-playing computer developed by IBM, best known for defeating the reigning world chess champion Garry Kasparov in 1997.
Deep learning: A subset of machine learning that utilizes neural networks with multiple layers to understand and learn from large amounts of data.
DeepMind Technologies is a UK-based artificial intelligence company acquired by Google in 2014. The company is known for its cutting-edge research in machine learning and its development of AI systems.
Default logic is a non-monotonic logic used to reason about incomplete information. It allows for making assumptions when information is incomplete.
Description Logic (DL): An approach to knowledge representation and reasoning that uses a formal language to describe the properties and relationships of objects and concepts in a domain.
Developmental robotics (DevRob) is a field of study that focuses on how robots can learn and develop through interacting with their environment, similar to how humans and animals learn and develop.
Diagnosis, in AI, is the task of inferring the likely cause of observed symptoms, whether identifying a disease from a patient's signs and symptoms or pinpointing a faulty component from a system's behavior.
A dialogue system is a computerized system designed to converse with a human user in a natural language format.
Did You Mean (DYM): A feature in search engines and other applications that suggests alternative or corrected spellings or queries based on the user's input.
Diffusion model: A mathematical model representing the process of how new ideas, products, or innovations spread and are adopted by a population over time.
Dimensionality reduction is the process of reducing the number of variables under consideration, while preserving as much relevant information as possible.
Disambiguation is the process of resolving ambiguity, typically in natural language processing, to determine the correct meaning or interpretation of a word or phrase.
A discrete system is a system that operates on a set of distinct, separate values or points in time, as opposed to a continuous system that operates on a continuous range of values or points in time.
Distributed artificial intelligence (DAI) refers to the use of multiple AI systems working together across different locations to achieve a common goal.
Domain knowledge refers to the expertise and understanding of specific subject matter or industry, including its concepts, practices, and challenges.
Dynamic Epistemic Logic (DEL) is a branch of modal logic that studies the changes in knowledge and beliefs of agents in multi-agent systems.
Eager learning is a machine learning approach where the model is constructed from all the available training data before any query is made, in contrast to lazy learning, which defers computation until query time.
The Ebert test, proposed by film critic Roger Ebert, gauges whether a computer-synthesized voice can tell a joke with sufficient skill to make people laugh.
Echo state network (ESN): A type of recurrent neural network (RNN) with a fixed, randomly generated hidden layer that allows for efficient training on temporal data.
An edge model refers to an artificial intelligence model that is deployed and operates on local devices or edge computing infrastructure, rather than in a centralized cloud environment. Edge models are designed to process and analyze data closer to the source, reducing latency and enhancing privacy and security.
Embedding: In the context of artificial intelligence, embedding refers to the representation of data in a lower-dimensional space, typically achieved through techniques such as word embeddings or feature embeddings. These embeddings capture semantic relationships between the data points, enabling more efficient processing and analysis.
An embodied agent is an artificial intelligence system that interacts with its environment and other agents using a physical body or physical representation.
Embodied cognitive science is a theoretical approach that views cognition as inherently connected to the body and the environment in which it is situated. This approach emphasizes the importance of physical and sensory experiences in shaping cognitive processes and behavior.
Emotion AI, also known as Affective Computing, refers to the development of artificial intelligence systems that can detect, interpret, and respond to human emotions. These AI systems use various methods, such as facial recognition, voice analysis, and biometric data, to gauge emotional states.
Ensemble averaging is a method of combining multiple models or data sets to produce a single output that is more accurate or reliable than any individual model or data set.
Entity: A thing with distinct and independent existence, often referring to an object or concept represented in a database or information system.
Entity annotation is the process of identifying and labeling entities within unstructured data, such as text or speech, to enable the extraction and analysis of specific information.
Entity Extraction: The process of identifying and isolating specific pieces of data, such as names, organizations, or locations, from unstructured text.
Environmental, Social, and Governance (ESG) - A framework for evaluating a company's ethical impact and sustainability practices, including its environmental responsibility, social policies, and corporate governance.
Epoch (Machine Learning): In machine learning, an epoch refers to one complete cycle through the entire dataset during the training of a neural network.
Error-driven learning is a process in which an AI system updates its internal model based on the difference between its predicted outputs and the true outcomes, with the goal of minimizing the error.
Ethics of Artificial Intelligence: The study and evaluation of the moral implications and considerations related to the development, implementation, and use of artificial intelligence technologies.
ETL, which stands for Extract, Transform, Load, is a process used to extract data from various sources, transform it into a consistent format, and then load it into a target database or data warehouse.
Evolutionary Algorithm (EA): A class of algorithms inspired by the process of natural selection to solve optimization and search problems by means of selection, recombination, and mutation.
Evolutionary computation refers to a family of computational algorithms inspired by the process of biological evolution. These algorithms are used to find solutions to optimization and search problems by mimicking the principles of natural selection and genetics.
Evolving Classification Function (ECF): A function that adapts and evolves over time to classify input data into different categories based on changing conditions and feedback.
Existential risk refers to a potential event or scenario that could lead to the extinction of humanity or cause significant and irreversible damage to human civilization.
An expert system is a computer system that emulates the decision-making ability of a human expert in a specific domain by using logical rules and knowledge representation.
Explainable AI/Explainability: The ability of an AI system to provide understandable explanations for its decisions and outputs.
Extraction: The process of identifying and capturing specific information or data from a source.
Keyphrase Extraction: The task of automatically identifying key terms or phrases from a text that best represent its content.
Fast-and-frugal trees are a type of decision tree that uses simple and quick rules to make decisions, often used in cognitive science and artificial intelligence.
Feature extraction is the process of transforming raw data into a set of features that are more meaningful and representative for a specific task or problem.
Feature learning is the process of automatically identifying and extracting meaningful patterns or features from raw data.
Feature selection is the process of choosing a subset of relevant features to use in model training and prediction, while excluding irrelevant or redundant features.
Federated learning is a machine learning approach where multiple devices or servers collaboratively train a shared model while keeping their data localized.
Few-shot learning is a machine learning approach where a model is trained with only a small amount of labeled training data.
A fine-tuned model is a pre-trained machine learning model that has been further trained with a specific dataset in order to improve its performance on a particular task.
Fine-tuning: The process of adjusting and optimizing pre-trained machine learning models to improve their performance for specific tasks or datasets.
Fluent: In the context of artificial intelligence, a fluent is a condition or proposition whose truth value can change over time, such as an object's location in a planning domain.
Formal language: A set of strings of symbols defined by a specific grammar and rules, often used in mathematics, computer science, and logic.
Forward chaining is a reasoning process that starts with available data and uses rules to derive new conclusions.
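A minimal sketch with a hypothetical rule base, where each rule maps a set of premises to a conclusion:

```python
def forward_chain(facts, rules):
    # Repeatedly fire any rule whose premises are all known facts,
    # adding its conclusion, until nothing new can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(["rainy"], "wet_ground"),
         (["wet_ground", "cold"], "icy_ground")]
print(forward_chain(["rainy", "cold"], rules))
# {'rainy', 'cold', 'wet_ground', 'icy_ground'}
```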
Foundation model: A foundation model is a large model trained on broad data that can be adapted, for example by fine-tuning, to a wide range of downstream tasks. It serves as a starting point upon which more specialized models and systems are built.
Frame: A data structure used to represent knowledge in the form of a set of interconnected concepts. Frames organize information into a hierarchical structure, with each concept containing attributes and relationships to other concepts.
Frame language refers to a method for representing knowledge in artificial intelligence, using a structure composed of frames, slots, and fillers. Each frame contains a set of slots representing attributes, and the fillers are the specific values assigned to those attributes.
The frame problem refers to the difficulty in determining which information is relevant and should be considered in a given situation, particularly in the context of artificial intelligence and decision-making systems.
Friendly artificial intelligence: A concept referring to AI systems that are programmed to prioritize the well-being of humans and act in a benevolent and cooperative manner.
F-score (F-measure, F1 measure): A measure of a test's accuracy that considers both precision and recall, calculated using the harmonic mean of precision and recall.
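A minimal sketch computing F1 from counts of true positives (tp), false positives (fp), and false negatives (fn); the counts are made up:

```python
def f1_score(tp, fp, fn):
    # Precision: fraction of predicted positives that are correct.
    # Recall: fraction of actual positives that were found.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # The harmonic mean penalizes a large gap between the two.
    return 2 * precision * recall / (precision + recall)

print(f1_score(tp=8, fp=2, fn=4))  # precision 0.8, recall ~0.667 -> F1 ~0.727
```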
Futures studies is the study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them.
A fuzzy control system is a type of control system that uses linguistic variables and fuzzy logic to handle imprecise and uncertain information for decision making.
Fuzzy logic is a computational approach that allows for imprecision in reasoning and decision making by representing and manipulating uncertainty and vagueness.
Fuzzy rule: A conditional statement that expresses a relationship between input variables and output variables using fuzzy logic to handle uncertainty and imprecision.
A fuzzy set is a mathematical concept that allows for elements to have varying degrees of membership rather than a strict binary distinction of being a member or not.
Game theory is the study of mathematical models of strategic interaction between rational decision-makers.
General AI: Refers to a form of artificial intelligence that possesses human-like cognitive abilities and is capable of performing a wide range of tasks typically requiring human intelligence.
General game playing (GGP) is the field of artificial intelligence that focuses on creating agents capable of playing a wide variety of games effectively, without human intervention or game-specific programming.
Generalized model: A model that is capable of solving a wide range of tasks or problems, as opposed to being limited to a specific task or problem.
Generative Adversarial Network (GAN): A type of machine learning framework consisting of two neural networks — the generator and the discriminator — designed to generate realistic data, such as images or text. The generator creates synthetic data, while the discriminator assesses its authenticity, leading to continuous improvement in generating more realistic outputs.
Generative AI (GenAI), also called generative artificial intelligence, refers to AI systems capable of creating new content, such as images, text, music, or other data, based on patterns and examples they have learned from.
Genetic Algorithm (GA): A search-based optimization technique inspired by natural selection and genetics. It works by evolving a population of potential solutions through the use of genetic operators such as selection, crossover, and mutation.
A genetic operator is a function used in genetic algorithms to alter the genetic material of individuals in a population, such as through mutation or crossover.
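A minimal genetic-algorithm sketch tying these pieces together: selection, single-point crossover, and mutation on a toy "count the ones" objective (all parameters are illustrative):

```python
import random

random.seed(0)

def fitness(bits):
    # Toy objective ("OneMax"): count the 1s in the bitstring.
    return sum(bits)

def crossover(p1, p2):
    # Single-point crossover: splice two parents at a random cut.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.05):
    # Flip each bit with a small probability to maintain diversity.
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for gen in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # selection: keep the fittest individuals
    pop = [mutate(crossover(random.choice(parents),
                            random.choice(parents)))
           for _ in range(len(pop))]

print(max(fitness(ind) for ind in pop))  # approaches 20 as the GA converges
```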
Glowworm Swarm Optimization: A population-based metaheuristic algorithm inspired by the social foraging behavior of glowworms, used for solving optimization problems.
Graph (abstract data type): A graph is a data structure that consists of a set of nodes (vertices) and a set of edges that connect pairs of nodes. It is used to represent relationships between entities.
Graph database (GDB) is a type of database that uses graph structures for data storage and query. It is designed for representing complex relationships between data entities.
A graph in discrete mathematics is a collection of nodes (vertices) along with a set of edges that connect pairs of nodes.
Graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects.
Graph traversal refers to the process of visiting and exploring all the nodes of a graph data structure.
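A minimal breadth-first traversal sketch over a hypothetical adjacency-list graph:

```python
from collections import deque

def bfs(graph, start):
    # Breadth-first traversal: visit nodes level by level, tracking
    # which nodes have already been seen.
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```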
Grounding is the process of connecting AI models or systems to real-world data or experiences in order to improve their accuracy and usefulness.
Hallucinations, in AI, are outputs from a model, especially a large language model, that are fluent and plausible but factually incorrect or fabricated. The term is borrowed from psychology, where it denotes a perception occurring without an external stimulus.
The halting problem is a classic problem in computer science that asks whether an algorithm can determine if another algorithm, when given certain inputs, will terminate or run forever. It is proven to be undecidable in general.
Heuristic: A problem-solving approach that uses practical methods and rules of thumb to find solutions, rather than exhaustive computation.
Hidden Layer: In a neural network, a layer of nodes between the input and output layers where the computational transformations take place. The nodes in the hidden layer are not directly accessible from the input or output.
Hidden unit: In a neural network, a hidden unit is a node that is neither an input nor an output of the network. It processes information and contributes to the network's ability to learn and make predictions.
Hybrid AI refers to a combination of different AI techniques, such as symbolic AI and machine learning, in order to solve complex problems and make more accurate predictions.
A hyper-heuristic is a problem-solving approach that aims to create or select heuristics, combining them in order to efficiently and effectively solve complex computational problems.
Hyperparameter: A setting or configuration of a machine learning model that is set prior to training and is not learned from the data. Hyperparameters control the learning process and affect the performance of the model.
The IEEE Computational Intelligence Society is a professional society that focuses on the study of computational intelligence, which encompasses the research and development of biologically and linguistically motivated computational paradigms.
Incremental learning is a machine learning approach where a model is continually updated and improved as new data becomes available, without retraining the entire model from scratch.
An inference engine is a component of an artificial intelligence system that processes information and makes logical deductions and inferences based on predefined rules and data.
Information Integration (II): The process of combining, synthesizing, and presenting data from multiple sources to provide a unified view of information.
Information Processing Language (IPL): A family of early programming languages developed in the 1950s by Allen Newell, Cliff Shaw, and Herbert Simon for AI research; it pioneered list processing and influenced Lisp.
Insight Engines: Enterprise search technologies that enable users to ask natural language questions and receive actionable insights from diverse structured and unstructured data sources.
Intelligence amplification (IA) refers to the use of technology to enhance human cognitive abilities.
Intelligence explosion: The hypothetical event in which artificial intelligence recursively self-improves, leading to an accelerating increase in its cognitive abilities.
Intelligent Agent (IA): A software program that can autonomously perform tasks or make decisions by analyzing and processing information from its environment.
Intelligent control: a type of control system that utilizes artificial intelligence to make decisions and take actions in response to changing conditions.
Intelligent Document Processing (IDP) or Intelligent Document Extraction and Processing (IDEP) is a technology that uses artificial intelligence and machine learning to extract and process data from documents such as invoices, forms, and contracts.
Intelligent Personal Assistant: An artificial intelligence system designed to assist and anticipate the needs of individual users through natural language processing and contextual understanding.
Intent: In the context of artificial intelligence and natural language processing, intent refers to the goal or purpose underlying a user's input or request in a conversation with a chatbot or virtual assistant. Understanding user intent is fundamental to providing accurate and relevant responses.
Interpretation refers to the process of analyzing or explaining the meaning of data or information. In the context of artificial intelligence, interpretation can involve understanding and making sense of the output or results generated by AI systems.
Intrinsic motivation refers to the internal drive and satisfaction derived from engaging in an activity for its own sake, without relying on external rewards or incentives.
An issue tree is a visual tool used to break down complex problems into smaller, more manageable components. It helps in organizing and prioritizing potential solutions to the problem.
The junction tree algorithm is a method used in probabilistic graphical models to efficiently perform inference and compute marginal probabilities. It involves constructing a tree structure that captures the dependencies and relationships between variables in the model.
Kernel method: A technique in machine learning that maps data points into a higher-dimensional space to make them more separable for classification or regression tasks.
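For example, the RBF (Gaussian) kernel computes a similarity equal to an inner product in an implicit higher-dimensional feature space without ever constructing that space. A minimal sketch with made-up points:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    # The RBF kernel's value equals the inner product of x and y in an
    # implicit, infinite-dimensional feature space; the mapping itself
    # is never computed explicitly (the "kernel trick").
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel((1.0, 2.0), (1.0, 2.0)))  # identical points -> 1.0
print(rbf_kernel((1.0, 2.0), (5.0, 6.0)))  # distant points -> near 0
```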
KL-ONE is a knowledge representation language for artificial intelligence that uses a formal system for representing concepts and relationships.
Knowledge acquisition is the process of collecting and obtaining information or expertise to enhance understanding and problem-solving capabilities.
A knowledge-based system (KBS) is a type of artificial intelligence system that uses a database of knowledge to reason and make decisions.
Knowledge engineering (KE) is the process of designing, creating, capturing, organizing, and representing knowledge for use in AI systems. It includes methods and techniques for acquiring and structuring knowledge so that knowledge-based systems can perform complex tasks typically associated with human intelligence.
Knowledge extraction is the process of identifying, capturing, and organizing knowledge from various sources for use in AI systems.
A knowledge graph is a structured data model that represents knowledge as a network of interconnected entities together with their attributes, relationships, and classifications. It is used to organize and link information for machine understanding and retrieval.
Knowledge Interchange Format (KIF) is a computer-oriented language for the interchange of knowledge among disparate systems.
A knowledge model is a representation of knowledge about a particular domain or subject, often using structured formats to facilitate computer processing.
Knowledge Representation and Reasoning (KR² or KR&R) refers to the methodology and techniques used to represent information and knowledge in a computer system, as well as the processes used to manipulate and derive new knowledge from the representations.
A label is a descriptive tag or identifier that is attached to data in order to categorize, classify, or organize it.
Labelled Data: Data that has been tagged with one or more labels, providing context or meaning to the information.
LangOps (Language Operations) - The practice of managing and optimizing the use of human language in AI systems and applications, including tasks such as language detection, translation, and language model training and deployment.
Language data refers to any type of information or content that is related to a specific language, such as text, speech, or linguistic patterns. It is used to train and improve natural language processing systems and language-based AI models.
Large Language Models (LLM) are artificial intelligence systems that are trained on massive amounts of text data and are capable of understanding and generating human-like language. These models have a wide range of applications in natural language processing tasks such as translation, summarization, and conversation.
Lazy learning is a machine learning method that postpones the computation of the hypothesis function until a query is made.
Lemma: A lemma is the base form of a word, typically used to represent a group of words that are inflected forms of the same root word.
Lexicon: A lexicon refers to the vocabulary of a language or a system of symbols. It encompasses all the words and terms that are used and understood within a specific domain or context.
Linguistic annotation involves adding descriptive information to a text, such as part-of-speech tags, syntactic structures, and named entities.
Linked Data refers to a method of publishing structured data so that it can be interrelated and linked with other data on the web. It employs specific standards and technologies to enable better integration and sharing of information.
Lisp (LISP): A high-level programming language with a distinctive parenthesized syntax, historically one of the dominant languages of AI research.
Logic programming is a programming paradigm that uses mathematical logic for programming goals and constraints.
Long Short-Term Memory (LSTM): A type of recurrent neural network architecture designed to efficiently capture and retain long-term dependencies and patterns in sequential data.
Machine intelligence refers to the ability of a computer or machine to perform tasks that typically require human-like intelligence, such as understanding language, making decisions, and learning from experience.
Machine learning is a branch of artificial intelligence that focuses on developing algorithms and techniques to enable computers to learn from and make decisions based on data.
Machine Learning (ML): A field of artificial intelligence that focuses on developing algorithms and statistical models to enable computer systems to learn and make decisions based on data without explicit programming.
Machine listening refers to the computational process of analyzing and understanding audio data, often with the goal of recognizing and interpreting sounds and speech.
Machine perception refers to the ability of a machine to interpret and understand sensory data from its environment, similar to how humans perceive and interpret sensory information.
Machine translation: The process of using software to automatically translate text or speech from one language to another.
Machine vision (MV): The technology and methods used for acquiring and processing images to understand, interpret, and make decisions based on visual data, often implemented in automated systems and industrial applications.
A Markov chain is a stochastic model used to describe a sequence of events in which the probability of each event depends only on the state attained in the previous event.
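A minimal sketch sampling a hypothetical two-state weather chain; note that each step depends only on the current state:

```python
import random

random.seed(1)

# Hypothetical transition probabilities: next state depends only on
# the current state, not on how we got there.
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state):
    states, probs = zip(*transitions[state])
    return random.choices(states, weights=probs)[0]

state, path = "sunny", ["sunny"]
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)  # one sampled trajectory of the chain
```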
Markov Decision Process (MDP): A mathematical framework used to model decision-making processes in which the outcomes of decisions are uncertain and partially random.
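A minimal value-iteration sketch on a made-up two-state MDP, repeatedly applying the Bellman backup to estimate the optimal value of each state:

```python
# transitions[state][action] = list of (probability, next_state, reward).
transitions = {
    "low":  {"wait": [(1.0, "low", 0.0)],
             "work": [(0.7, "high", 1.0), (0.3, "low", 0.0)]},
    "high": {"wait": [(1.0, "high", 2.0)],
             "work": [(0.6, "high", 2.0), (0.4, "low", 1.0)]},
}
gamma = 0.9  # discount factor for future rewards
V = {s: 0.0 for s in transitions}

for _ in range(100):  # Bellman backups until the values settle
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values())
         for s in transitions}

print({s: round(v, 2) for s, v in V.items()})  # optimal state values
```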
Mathematical optimization refers to the process of finding the best solution for a problem from all feasible solutions. It involves maximizing or minimizing a function based on certain constraints.
Mechanism design is the process of designing rules and incentives to achieve desired outcomes in situations with self-interested individuals.
Mechatronics is a field of engineering that combines mechanical and electrical systems with computer science and control engineering to design and create automated systems.
Metabolic network reconstruction: The process of compiling information about all the biochemical reactions that occur in an organism to create a comprehensive map of its metabolic pathways.
Metabolic network simulation: The use of mathematical models and algorithms to predict how a metabolic network will behave under different conditions, such as changes in nutrition or genetic mutations.
Metacontext: The broader context within which a specific context or set of contexts is situated.
Metaprompt: A prompt or instruction designed to generate responses or actions within a higher-level or overarching context.
Metadata is data that provides information about other data. It describes the attributes of the data and helps organize, locate, and understand it.
Metaheuristic: A higher-level strategy or algorithm designed to find solutions to optimization problems, often by repeatedly exploring and exploiting the search space to improve solutions.
A model is a simplified representation of a system, process, or concept used to make predictions or understand behavior. In the context of AI, a model is often a mathematical or computational structure designed to perform a specific task such as classification, prediction, or optimization.
Model checking is a formal verification technique used to automatically check if a system satisfies a given property.
Model Drift: Model drift refers to the degradation of a machine learning model's performance over time due to changes in the underlying data distribution.
Model Parameter: Model parameter refers to the internal variables of a model that are learned from training data and used to make predictions or perform tasks. Model parameters are adjusted during the training process to minimize errors or optimize performance.
Modus ponens is a form of deductive reasoning where a conditional statement and the affirmation of the antecedent lead to the affirmation of the consequent.
Modus tollens: A valid form of argument that uses the rule of inference "if P then Q," along with the premise "not Q," to deduce "not P."
Monte Carlo tree search: Monte Carlo tree search (MCTS) is a decision-making algorithm that uses random sampling (rollouts) to evaluate the potential outcomes of different moves. It is widely used in artificial intelligence for games and other complex planning scenarios.
Morphological Analysis: The process of analyzing the structure and forms of words in a language to study their meanings and relationships.
Multi-agent system (MAS): A system composed of multiple autonomous agents, each capable of interacting with each other and the environment to achieve individual and/or collective goals.
Multimodal models: AI models that are designed to process and interpret information from multiple types of data sources or modalities, such as text, images, and sound.
Modalities: Different types of data sources, such as text, images, audio, and video, that can be used as input for AI systems.
Multi-swarm optimization is a metaheuristic algorithm that utilizes multiple swarms of solutions to explore the search space in parallel.
Multitask prompt tuning (MPT) refers to the process of optimizing a language model's prompts for multiple tasks to improve overall performance.
Mutation: In evolutionary computation, a random change to a candidate solution's encoding that introduces new variation into the population, by analogy with random changes to an organism's DNA sequence.
Mycin: A rule-based expert system developed in the early 1970s as one of the first examples of AI applications in the medical domain.
Naive Bayes classifier: A simple probabilistic classifier based on applying Bayes' theorem with strong independence assumptions between the features.
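As a sketch of how this looks in practice, the snippet below fits a Gaussian Naive Bayes classifier with scikit-learn (assumed to be installed) on the bundled iris dataset.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GaussianNB()            # assumes features are conditionally independent
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```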
Naive semantics is an approach to natural language processing where the meaning of words and sentences is interpreted based on their literal, surface-level interpretation, without taking into account context, pragmatics, or deeper levels of meaning.
Name binding is the association of an identifier with an entity, such as a variable, within a program.
Named-entity recognition (NER) is a natural language processing task that involves identifying and classifying named entities, such as names of people, organizations, locations, dates, and more, within unstructured text.
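A minimal NER sketch using spaCy, assuming both the spacy package and its small English model "en_core_web_sm" are installed; the sentence is invented for illustration.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Ada Lovelace met Charles Babbage in London in 1833.")
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. "Ada Lovelace PERSON", "London GPE"
```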
A named graph is a collection of RDF (Resource Description Framework) triples that are given a name or identifier. This allows for easier reference and management of specific sets of triples within a larger RDF dataset.
Natural language generation (NLG) is a subfield of artificial intelligence (AI) that focuses on the automatic generation of natural language text from structured data.
Natural Language Processing (NLP) is a field of artificial intelligence focused on the interaction between computers and human language, enabling computers to understand, interpret, generate, and manipulate it in valuable ways.
Natural language programming is the use of human language to interact with and command computer programs.
Natural Language Understanding (NLU) is the ability of a computer program or AI system to comprehend and interpret human language in a way that is meaningful and useful.
A network motif is a recurring and statistically significant sub-graph pattern within complex networks.
Neural machine translation (NMT) is a machine translation approach that uses artificial neural networks to translate text from one language to another.
Neural network: A neural network is a computational model inspired by the structure and functioning of the human brain, composed of interconnected units called neurons that work together to process and analyze complex data.
Neural Turing machine (NTM): A type of artificial neural network that augments the capabilities of traditional models by incorporating an external memory bank and differentiable read-write operations. This enables NTMs to perform complex tasks that require extensive memory storage and retrieval.
Neurocybernetics is the study of the relationship between the nervous system and cybernetics, focusing on the control and communication in biological and artificial systems.
Neuro-fuzzy: A combination of neural networks and fuzzy logic that pairs the learning ability of neural networks with the interpretable, rule-based reasoning of fuzzy logic to model complex systems.
Neuromorphic engineering is a branch of engineering that involves designing and building artificial neural systems, networks, and devices that mimic the structure and behavior of the human brain.
NLG (Natural Language Generation): The process of generating human-like language from structured data or information using algorithms and machine learning techniques.
NLQ (Natural Language Query): A process in which a user interacts with a computer system using natural language to ask questions or request information.
NLT (Natural Language Technology) refers to the field of computer science that focuses on the interaction between computers and humans through natural language. It encompasses various technologies that enable machines to understand, interpret, and generate human language.
Node: In the context of AI and machine learning, a node is a processing unit within a neural network that receives input, processes it, and produces an output.
Nondeterministic algorithm: A type of algorithm where the output is not fully predictable based on the input. Instead, the algorithm can produce different results for the same input.
Nouvelle AI is an approach to artificial intelligence, associated with Rodney Brooks, that builds intelligence from the bottom up through embodied systems that interact directly with the real world, rather than through symbolic internal models.
NP stands for "noun phrase." It is a linguistic term for a phrase in a sentence that functions as a noun, often consisting of a noun and other words that modify or describe it.
NP-completeness: A problem is NP-complete if it belongs to the complexity class NP (nondeterministic polynomial time) and every problem in NP can be reduced to it in polynomial time.
NP-hardness refers to a category of computational problems that are at least as hard as the hardest problems in the complexity class NP.
Occam's razor: The principle that states that among competing hypotheses, the one with the fewest assumptions should be selected.
Offline learning: A technique in machine learning where the model is trained on a static dataset without requiring real-time data input during training.
Online machine learning: A type of machine learning where the model is updated continuously as new data becomes available, allowing for real-time adaptation to changes in the input data.
Ontology: In information science and artificial intelligence, an ontology is a formal specification of a set of concepts and the relationships between them within a domain; the term originates in philosophy as the study of the nature of existence and being.
Ontology learning is the automatic extraction of knowledge from structured or unstructured sources to create or extend ontologies.
OpenAI is an artificial intelligence research lab and company that aims to ensure advanced AI technology benefits all of humanity.
OpenCog is a software platform for building and sharing intelligent agents.
Open Mind Common Sense (OMCS): A crowdsourced knowledge project from the MIT Media Lab that collects common-sense statements from volunteers to build a large knowledge base for AI systems.
Open-source software (OSS) is a type of computer software in which the source code is made available to the public, and can be modified and distributed by anyone.
Overfitting occurs when a machine learning model performs well on the training data but poorly on new, unseen data due to capturing noise and random fluctuations in the training data.
Parameter: A variable that defines a particular aspect of a system or model. In AI and machine learning, parameters are the internal variables whose values are learned from data during training and are used to produce the model's predictions or outputs.
Parsing is the process of analyzing a string of symbols to determine its grammatical structure.
Partially Observable Markov Decision Process (POMDP): A model used in decision making under uncertainty where the state of the system is only partially observable.
Partial order reduction: A technique used in formal verification to reduce the number of possible interleavings of concurrent processes, while preserving the properties being verified.
Particle swarm optimization (PSO) is a computational optimization technique inspired by the behavior of bird flocks and fish schools. It uses a population of candidate solutions, represented as particles, that move through the search space to find the optimal solution.
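A bare-bones PSO sketch in pure Python, minimizing a toy "sphere" objective; the swarm size, coefficients, and objective are illustrative choices, not canonical values.

```python
import random

def sphere(x):                       # toy objective: minimum at the origin
    return sum(xi * xi for xi in x)

DIM, N, STEPS = 2, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and attraction coefficients

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]          # each particle's personal best position
gbest = min(pbest, key=sphere)       # the swarm's best position so far

for _ in range(STEPS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)

print("best found:", gbest, "value:", sphere(gbest))
```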
Part-of-Speech Tagging: The process of assigning a grammatical category (such as noun, verb, adjective) to each word in a given text.
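For example, NLTK's off-the-shelf tagger (assuming the nltk package is installed and its tokenizer and tagger resources have been fetched via nltk.download()):

```python
import nltk

tokens = nltk.word_tokenize("The quick brown fox jumps over the lazy dog")
print(nltk.pos_tag(tokens))   # [('The', 'DT'), ('quick', 'JJ'), ...]
```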
Pathfinding is the process of finding the most efficient route between a starting point and a destination, typically used in the context of AI and robotics.
Pattern recognition is the process of identifying patterns or regularities in data. It involves recognizing similarities or differences between inputs and classifying them into categories or groups.
PEMT (post-editing of machine translation) refers to the process of having human editors revise machine-generated translations to improve their accuracy and fluency.
Plugins: Software components that can be added to a program to extend its functionality or add new features.
Post-processing refers to the manipulation and enhancement of a digital image after it has been captured or produced.
Precision is a measure of the exactness of a system's output. In information retrieval and classification, it is the ratio of relevant items retrieved to the total number of items retrieved.
Predicate logic is a formal system that uses predicates (or properties) and quantifiers to express relationships and make inferences in a structured way.
Predictive analytics involves the use of data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data.
Pre-processing refers to the initial steps taken to convert raw data into a format suitable for further analysis or modeling. This may involve tasks such as cleaning, transforming, and aggregating the data.
A pretrained model is a machine learning model that has been trained on a large dataset and then distributed for use in solving similar tasks.
Pretraining is the process of training a model on a large dataset to learn general features before fine-tuning on a smaller, task-specific dataset.
Principal Component Analysis (PCA) is a statistical method used to simplify data by reducing the number of variables while preserving important information. It identifies the most significant patterns and trends within the data.
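A short PCA sketch with scikit-learn (assumed installed), projecting the four-dimensional iris measurements down to two components:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)        # 150 samples, 4 features each
pca = PCA(n_components=2)                # keep the 2 strongest components
X2 = pca.fit_transform(X)
print(X2.shape)                          # (150, 2)
print(pca.explained_variance_ratio_)     # variance captured per component
```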
Principle of rationality refers to the concept that individuals will make decisions that maximize their own self-interest or utility.
Probabilistic programming (PP) is a programming paradigm that enables the creation of models for probabilistic inference and reasoning using programming languages.
A production system is a computer program or set of rules that represents and processes knowledge in order to make decisions or perform tasks.
A programming language is a formal language used to give instructions to a computer or other programmable devices to perform specific tasks.
Prolog is a programming language that is widely used for artificial intelligence applications, particularly in the area of natural language processing and expert systems. It is based on a formal system of logic known as first-order logic.
Prompt: A prompt is a cue or instruction that initiates a specific action or response. It is used to guide or elicit a desired behavior or input from a user or system.
Prompt chaining refers to the method of using consecutive prompts to guide a conversation or interaction. It involves presenting follow-up prompts based on the previous response to further the conversation or achieve a specific goal.
Prompt engineering involves crafting clear, precise, and effective instructions or cues for AI systems to influence their behavior or output.
Propositional calculus is a formal system for reasoning about the truth or falsity of logical statements, known as propositions, using logical operators such as AND, OR, and NOT.
Python: Python is a high-level programming language known for its simplicity and readability. It is commonly used for web development, data analysis, artificial intelligence, and scientific computing.
The qualification problem refers to the difficulty of listing all of the preconditions required for a real-world action to have its intended effect, a long-standing challenge for logic-based AI systems.
Quantifier: In logic and mathematics, a quantifier is a grammatical element used to indicate the scope of a variable and to specify the quantity of objects in a given domain. It can be used to express universal or existential statements. For example, "for all" and "there exists" are common quantifiers in logical expressions.
Quantum computing: A computing paradigm that utilizes the principles of quantum mechanics to perform operations on data using quantum bits, or qubits, which can exist in superpositions of states, enabling certain computations to be performed far more efficiently than on classical computers.
Query language is a computer programming language used to retrieve and manipulate data from a database.
A radial basis function network is a type of artificial neural network that uses radial basis functions as activation functions in its hidden layers. These functions are centered at different points in the input space and are typically used for interpolation, classification, or function approximation tasks.
Random Forest: Random Forest is a machine learning algorithm that uses an ensemble (combination) of decision trees to make predictions. It creates multiple trees and merges their predictions to improve accuracy and reduce overfitting.
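A minimal random-forest sketch with scikit-learn (assumed installed), on synthetic data generated purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_tr, y_tr)          # each tree is trained on a bootstrap sample
print("accuracy:", forest.score(X_te, y_te))
```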
A reasoning system is a component of artificial intelligence that processes information and derives conclusions based on logic and established rules.
Recall is a measure of the completeness of a retrieval of relevant information from a larger set of data. It is the ratio of the number of relevant items retrieved to the total number of relevant items.
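Precision and recall reduce to simple ratios over the confusion-matrix counts; the counts below are hypothetical:

```python
def precision(tp, fp):
    # fraction of items predicted relevant that actually are relevant
    return tp / (tp + fp)

def recall(tp, fn):
    # fraction of all relevant items that were actually retrieved
    return tp / (tp + fn)

# hypothetical counts: 8 true positives, 2 false positives, 4 false negatives
print(precision(8, 2))   # 0.8
print(recall(8, 4))      # 0.666...
```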
A recurrent neural network (RNN) is a type of artificial neural network designed to process sequential data by using feedback loops, maintaining a memory of past inputs that it can draw on when making predictions or classifications.
The region connection calculus is a mathematical framework used for modeling spatial relationships and reasoning about regions and their connections in a formal way.
Reinforcement learning (RL) is a type of machine learning in which an agent learns to make decisions by taking actions in an environment and receiving rewards or penalties, with the goal of maximizing its cumulative reward.
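A minimal tabular Q-learning sketch in pure Python, on a hypothetical five-cell corridor where the agent starts at cell 0 and earns a reward of 1 for reaching cell 4; all hyperparameters are illustrative.

```python
import random

N_STATES, ACTIONS = 5, ("left", "right")
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = max(0, s - 1) if a == "left" else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for _ in range(500):                              # episodes
    s = 0
    while s != N_STATES - 1:
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda act: Q[(s, act)]))  # epsilon-greedy
        s2, r = step(s, a)
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda act: Q[(0, act)]))  # learned policy: "right"
```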
Reinforcement learning with human feedback (RLHF) is a machine learning approach that combines reinforcement learning with input from human observers to train models. It enables machines to learn from human guidance in addition to rewards and penalties.
Relations: In the context of artificial intelligence and database management, relations refer to the connections or associations between different data elements within a database. This term is commonly used in the context of relational databases, where data is stored and organized using tables with predefined relationships between them.
Reservoir computing is a type of machine learning method that uses a fixed, random "reservoir" of computing units to perform complex tasks such as pattern recognition and time-series prediction.
Resource Description Framework (RDF) is a standard for representing and exchanging data on the web. It provides a way to describe resources using simple statements, known as triples, consisting of subject, predicate, and object.
Responsible AI refers to the ethical and accountable development, deployment, and use of artificial intelligence technologies, taking into consideration the potential impact on society and the environment. It involves ensuring fairness, transparency, privacy, and security in AI systems.
A restricted Boltzmann machine (RBM) is a type of generative neural network used for feature learning and training feature detectors.
The Rete algorithm is a pattern matching algorithm used in the implementation of rule-based systems in artificial intelligence. It efficiently matches patterns against a given set of data.
Robotics is the branch of technology that deals with the design, construction, operation, and use of robots.
R programming language is a free software environment for statistical computing and graphics.
A rule-based system is a computer system that uses a set of explicitly defined rules to make decisions and solve problems.
Rules-based Machine Translation (RBMT) is a machine translation approach that relies on linguistic rules and grammatical structures to translate text from one language to another.
SAO (Subject-Action-Object): A fundamental structure in natural language processing that represents the relationships between an entity (subject), an action, and another entity (object).
Satisfiability refers to the property of a logical formula for which there exists at least one assignment of values to its variables that makes the formula true.
A search algorithm is a step-by-step process used to find a specific piece of information or solve a problem within a defined set of data.
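Binary search is a classic example: by halving the search interval at each step, it finds a target in a sorted list in logarithmic time.

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # probe the midpoint
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # 3
```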
Selection: The process of choosing a subset of data or objects based on specific criteria.
Selective Linear Definite clause resolution (SLD-resolution) is a method used for proving logic formulas in artificial intelligence and automated reasoning. It is a specific inference rule used in logic programming.
Self-management is the ability of an AI system to regulate and control its own functioning and decision-making processes without human intervention.
Self-supervised learning is a machine learning technique where a model learns from the input data itself, without requiring explicit supervision or labeled data.
Semantic annotation refers to the process of adding metadata or tags to digital content in order to extract and convey the meaning of the content.
A semantic network is a way of representing knowledge or concepts as interconnected nodes and links.
A semantic query is a search that uses natural language processing and understands the meaning and context of words in order to retrieve relevant information from a database or search engine.
A semantic reasoner is a software tool that can infer and deduce new information based on the meaning and relationships of data and concepts.
Semantics refers to the meaning or interpretation of a word, phrase, or language. It involves understanding the relationships between words and the broader context in which they are used.
Semantic search is a search technique that uses the meaning and context of the words in a query to find relevant results, rather than simply matching keywords.
Semi-structured data is a type of data that does not conform to the structure of traditional relational databases, but has some organizational properties that make it more accessible than unstructured data.
Sensor fusion refers to the process of combining data from multiple sensors to create a more complete and accurate understanding of a given environment or situation.
Sentiment: The emotional tone or attitude expressed in a piece of text, often used to gauge the overall positive, negative, or neutral feeling conveyed by language.
Sentiment analysis is the process of using natural language processing and machine learning techniques to determine the sentiment or emotion expressed in text data, such as positive, negative, or neutral.
Separation logic is a formal system used to reason about the behavior of programs that manipulate memory allocation and deallocation. It allows separate reasoning about different parts of the program's memory.
Similarity is a measure of how alike two things are. It is often used to compare objects in data analysis and machine learning.
Correlation is a statistical measure that indicates the extent to which two variables change together. It provides insight into the relationship between variables.
Similarity learning is a type of machine learning that focuses on training models to understand and measure the similarities between data points.
Simple Knowledge Organization System (SKOS) is a W3C recommendation designed for expressing concept schemes, including thesauri, taxonomies, and classification schemes, in a machine-readable format.
Simulated annealing (SA) is a probabilistic optimization technique used to find near-optimal solutions to complex combinatorial optimization problems. It is inspired by the process of annealing in metallurgy.
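A compact simulated-annealing sketch in pure Python, minimizing a toy one-dimensional objective; the proposal step and cooling schedule are illustrative choices.

```python
import math, random

def energy(x):                    # toy objective: minimum at x = 0
    return x * x

x, T = 10.0, 1.0
while T > 1e-3:
    candidate = x + random.uniform(-1, 1)        # propose a nearby state
    delta = energy(candidate) - energy(x)
    # always accept improvements; accept worse moves with prob. e^(-delta/T)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = candidate
    T *= 0.99                                    # cool down gradually
print(round(x, 3))
```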
Situated approach: An approach to artificial intelligence that emphasizes the importance of context and environment in understanding and solving problems. It considers the impact of real-world situations on the behavior and decision-making of intelligent systems.
Situation calculus is a formalism for representing and reasoning about dynamic worlds and actions within the field of artificial intelligence and logic.
Software is a collection of instructions and data that tells a computer how to perform tasks and functions.
Software engineering is the systematic application of engineering approaches to the development, operation, and maintenance of software.
SPARQL: a query language and protocol used for querying and manipulating data stored in RDF format.
Spatial-temporal reasoning refers to the ability to understand and manipulate objects and events in both space and time. It involves reasoning about the relationships and interactions between objects and events based on their positions and changes over time.
Specialized corpora are collections of language data that have been specifically selected and categorized for use in a particular field or subject area, such as medicine, law, or finance. These corpora are utilized for purposes like language analysis and development of domain-specific AI models.
Speech analytics refers to the process of analyzing spoken language in order to gain insights and information.
Speech recognition is the ability of a machine to interpret and understand spoken language, converting it into text or commands.
Spiking neural network (SNN): A type of artificial neural network that closely models the behavior of biological neural networks by using discrete pulses or "spikes" for communication between nodes.
State: A representation of the current situation or condition of a system at a specific point in time. In the context of artificial intelligence, a state refers to the current configuration or set of variables that defines the condition of a system or agent.
Statistical classification is a method in which data points are categorized into different classes or groups based on their attributes and statistical properties.
Statistical Relational Learning (SRL) is a subfield of machine learning that integrates statistical methods with relational databases and logic programming to reason about complex, structured data.
Stochastic optimization (SO) refers to the use of probabilistic methods in the optimization of complex systems or processes. This approach accounts for uncertainty and randomness in the system being optimized.
Stochastic semantic analysis is a method of analyzing language that uses probability and randomness to understand the meaning of words and phrases within a given context.
Strong AI refers to artificial intelligence systems that possess human-like cognitive abilities, such as understanding, reasoning, and problem-solving.
Structured data refers to data that is organized and formatted in a specific, predefined way, making it easily accessible and understandable by both people and computer systems.
Subject-matter expert (SME): An individual who possesses deep knowledge and expertise in a specific area or field. SMEs are often consulted to provide insights, guidance, and advice within their domain of expertise.
Superintelligence refers to an intelligence that surpasses human intelligence in all aspects, including problem-solving, creativity, and social skills.
Supervised learning is a type of machine learning algorithm where the model is trained on labeled data, and the goal is to learn a mapping from input to output based on the input-output pairs provided during training.
Support-vector machines are a type of supervised learning algorithm used for classification and regression analysis. They work by finding the optimal hyperplane that separates data points into different classes.
Swarm Intelligence (SI) is a computational method that models the collective behavior of decentralized self-organized systems, such as the behavior of a group of insects or animals.
Symbolic artificial intelligence is a branch of AI that involves representing knowledge and problem-solving using symbols and rules.
Symbolic Methodology: An approach to problem-solving in artificial intelligence that involves the manipulation and processing of symbols and abstract concepts to represent knowledge and perform reasoning tasks.
Syntax refers to the structure or rules that govern the arrangement of words and symbols to form well-formed sentences or code in a programming language.
Synthetic Intelligence (SI): Artificially created intelligent systems designed to mimic or exceed human cognitive abilities, often through the use of machine learning algorithms and advanced data processing.
Systems neuroscience is the interdisciplinary study of the nervous system, aiming to understand its structure and function at various levels, including the molecular, cellular, and network levels.
Tagging: The process of adding labels or keywords to digital content to categorize and organize it for easier search and retrieval.
Taxonomy is the practice and science of classification. It involves organizing and categorizing elements into groups based on their characteristics and relationships.
Technological Singularity: The theoretical point at which artificial intelligence and technology advance beyond human capability and understanding, potentially leading to profound and unpredictable changes in society and the human condition.
Temperature: In generative AI, a sampling parameter that controls the randomness of a model's output; lower values make responses more deterministic, while higher values make them more varied.
Temporal difference learning is a reinforcement learning method where the difference between predicted and actual rewards is used to update the value function of an agent.
TensorFlow is an open-source machine learning framework developed by Google for building and training machine learning models.
Tensor network theory is a framework used in quantum physics and machine learning to represent and manipulate large multidimensional arrays of numbers called tensors.
Test data refers to the specific set of data used to validate the functionality and accuracy of a software program or system.
Test Set: A test set is a subset of the data used to validate the performance of a machine learning model. It is used to assess the model's accuracy and generalization to new, unseen data.
Text Analytics: The process of analyzing, understanding, and extracting useful information from unstructured text data.
Text summarization is the process of creating a concise and condensed version of a given text, while retaining its main points and meaning.
Theoretical computer science (TCS) is a branch of computer science that focuses on understanding the nature and capabilities of computation. It involves the study of algorithms, data structures, complexity theory, and formal languages.
Theory of Computation: The branch of computer science that deals with the study of algorithms, their formal properties, and their execution on computational devices.
Thesauri: Collections of words and their synonyms organized to facilitate language understanding and usage.
Thompson sampling is a decision-making algorithm used in the field of machine learning and reinforcement learning. It balances the exploration of uncertain solutions with the exploitation of known solutions to optimize decision-making and maximize rewards.
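A minimal Thompson-sampling sketch for a hypothetical two-armed Bernoulli bandit, using Beta posteriors over each arm's unknown payout rate:

```python
import random

true_rates = [0.3, 0.6]      # hidden from the algorithm; illustrative values
wins, losses = [0, 0], [0, 0]

for _ in range(1000):
    # sample a plausible payout rate for each arm from its Beta posterior...
    samples = [random.betavariate(wins[a] + 1, losses[a] + 1) for a in (0, 1)]
    a = samples.index(max(samples))      # ...and play the best-looking arm
    if random.random() < true_rates[a]:
        wins[a] += 1
    else:
        losses[a] += 1

print("pulls per arm:", [wins[a] + losses[a] for a in (0, 1)])  # mostly arm 1
```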
Time complexity is a measure of the amount of time an algorithm takes to complete as a function of the size of its input, describing how the running time grows as the input grows.
Tokens are individual units of language, such as words or characters, that make up a larger body of text. In the context of artificial intelligence, tokens are used to represent and manipulate language data.
Training data: The set of input and output data used to teach a machine learning model.
Training Set: A dataset used to train a machine learning model by providing examples of inputs and their corresponding outputs.
Transfer learning: Transfer learning is a machine learning technique where a model trained on one task is fine-tuned or adapted to another related task, typically resulting in improved performance and efficiency.
Transhumanism is a movement that advocates for the use of technology and science to enhance human abilities and extend human lifespan.
A transition system is a mathematical model that describes the different states of a system and the transitions that can occur between those states. It is used to represent the behavior of systems in computer science and related fields.
Treemap: A visual representation of hierarchical data in the form of nested rectangles, where each rectangle represents a specific component of the whole dataset. The size and color of the rectangles often indicate different attributes of the data.
Tree traversal refers to the process of visiting and accessing each node in a tree data structure exactly once. This is often done systematically, such as in the order of pre-order, in-order, or post-order traversal.
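An in-order traversal of a tiny binary tree, as a sketch of the idea:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):
    # visit the left subtree, then the node itself, then the right subtree
    if node:
        yield from inorder(node.left)
        yield node.value
        yield from inorder(node.right)

#        2
#       / \
#      1   3
root = Node(2, Node(1), Node(3))
print(list(inorder(root)))   # [1, 2, 3]
```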
Triple or Triplet Relations, also known as Subject Action Object (SAO), are a type of structured format used to represent relationships between three entities, typically with the subject performing an action on the object. This format is commonly used in knowledge representation and natural language processing.
A true quantified Boolean formula (TQBF) is a fully quantified Boolean formula, one in which every variable is bound by a universal or existential quantifier, that evaluates to true. Deciding whether a given formula is a TQBF is the canonical PSPACE-complete problem.
Tunable: Capable of being adjusted or fine-tuned to achieve desired performance or characteristics.
Tuning (aka Model Tuning or Fine Tuning): The process of making adjustments to an AI model to optimize its performance for a specific task or dataset. This involves modifying the model's hyperparameters, architecture, or training data to improve accuracy or efficiency.
A Turing machine is a hypothetical mathematical model that represents a device capable of manipulating symbols on a strip of tape based on a finite set of rules.
Turing Test: A test proposed by Alan Turing to determine a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
Type System: A set of rules that define the properties and interactions of data types within a programming language.
Unstructured Data: Data that does not conform to a specific data model or format, making it more challenging to organize, process, and analyze compared to structured data. It can include text, images, videos, audio files, and more.
Unsupervised learning is a machine learning technique in which a model is trained on unlabeled data, allowing it to discover patterns and relationships on its own.
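For instance, k-means clustering with scikit-learn (assumed installed) discovers groups in unlabeled points; the data below are made up to form two obvious blobs.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0, 0], [0, 1], [1, 0], [9, 9], [9, 10], [10, 9]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)    # the model separates the two blobs without any labels
```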
Validation data refers to a subset of data used to assess the performance of a machine learning model during the training phase.
Variance is a measure of the spread or dispersion of a set of data points. It quantifies how much the data values differ from the mean.
Variation: The range of different possible values or outcomes within a dataset or system.
Vision processing unit (VPU): A specialized processor designed to accelerate the processing of visual data, such as images and video, for tasks like object recognition and scene understanding.
Watson is an artificial intelligence system developed by IBM, known for its ability to understand and process natural language.
Weak AI refers to artificial intelligence that is limited to a specific task or set of tasks and does not possess general intelligence or consciousness.
Windowing is a signal processing technique used to divide a time series into overlapping segments for analysis.
The World Wide Web Consortium (W3C) is an international community that develops standards and guidelines to ensure the long-term growth of the Web. It oversees the development of web protocols and promotes their adoption.