In AI, analytics, data, tech and business, the terminology is constantly developing along with the technology. All definitions are in alphabetical order. Please click a letter to jump to a section.

If you have any suggestions for more, please feel free to let me know and they will be added.

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

A

Algorithm – An algorithm is a set of instructions used by computer software to complete a task or solve a problem. AI systems use machine learning and deep learning algorithms to learn from data and make judgements or predictions based on it. Algorithms are a core part of artificial intelligence and are utilised in a variety of applications, ranging from simple tasks to complicated problem-solving.

Artificial Intelligence (AI) – AI, or artificial intelligence, is a field of computer science that focuses on creating intelligent machines that can simulate human-like thinking and behaviour. It is popular because it has the potential to revolutionise many industries and improve many aspects of our daily lives. For businesses, AI can be a powerful tool for automating complex processes and making more accurate and efficient decisions. For example, AI can be used for tasks such as customer support, product recommendation, and fraud detection, which can improve the customer experience and reduce operational costs. Additionally, AI can help businesses gain a competitive advantage by enabling them to analyze large amounts of data and make predictions about market trends and customer behavior.

Early work in the field was carried out by researchers such as Alan Turing, who developed the Turing test as a way of determining whether a machine could exhibit intelligent behavior. In the 1950s and 1960s, researchers such as John McCarthy and Marvin Minsky laid the foundations for modern AI by developing algorithms and techniques such as machine learning and natural language processing. Today, AI continues to evolve and be developed by researchers and engineers around the world.

AI cloud services – These services combine AI capabilities with cloud architectures, supporting shared projects and workloads.

AGI (Artificial General Intelligence) – The ability of an intelligent software agent to understand any intellectual task at a level similar to a human. Clearly some way off, this has been the ultimate ideal of many AI developers. Commercially, an agent with human-level intelligence would open far wider possibilities for automation in the future. More on AGI

AI Engineering – The field of research that combines principles of system engineering, software engineering, computer science and human-centred design. As AI develops and strives to become more intelligent, it will have to adopt different forms of technology and knowledge systems. More about AI engineering

AI Maker and Teaching Kits – These are DIY AI systems that are designed for teaching.

AI Maturity – AI maturity in a business refers to the level of experience and capability that the business has in using artificial intelligence (AI) technologies. This can include factors such as the extent to which the business has integrated AI into its operations, the level of investment it has made in AI, the level of skill and expertise among its employees in using AI, and the results that the business has achieved through its use of AI. A business with high AI maturity is likely to have a strong understanding of how to use AI to improve its operations and achieve its goals, while a business with low AI maturity may be just starting to explore the potential uses of AI.

AI TRiSM – This is short for AI (T)rust, (Ri)sk, & (S)ecurity (M)anagement. It is a form of model governance, intended to give people confidence in models by ensuring they are properly governed, trustworthy, fair, reliable, secure, and that the data they use is protected. In future, I see most organisations adopting this model as part of their AI governance. More on AI Trism by Fairly

Analytics – Analytics is the process of using data, mathematical models, and computational techniques to extract insights and knowledge from data. In a business context, analytics can help organizations to better understand their operations, customers, and markets, and to make more informed decisions. For example, a business might use analytics to identify trends and patterns in customer behavior, to forecast future demand for its products or services, or to optimize its supply chain and operations. By applying analytics to its data, a business can gain valuable insights that can help it to improve its performance, compete more effectively, and achieve its goals.

Autonomy – AI autonomy refers to the ability of an AI system to operate independently and make decisions without human intervention. AI autonomy is often used to describe systems that are capable of adapting to new situations and learning new tasks on their own, without the need for human guidance or supervision. The level of AI autonomy can vary, with some systems being more autonomous than others.

Autonomous Vehicles – These are vehicles that drive themselves without the input of a human driver. There are several levels of self-driving, depending on the degree of automation. More on autonomous vehicles

AutoML – AutoML is the automation of the process of building and training machine learning models. This can save time and effort for data scientists and machine learning practitioners, and it can help businesses to quickly and easily develop and deploy AI models for a wide range of applications. AutoML can also help businesses to improve the performance and accuracy of their models by automatically selecting the best algorithms and hyperparameters for a given problem. This can lead to better decision making and improved business outcomes.

B

Backpropagation – Backpropagation is an algorithm used in machine learning to train artificial neural networks. It is a supervised learning algorithm that involves adjusting the weights of the connections between the neurons in a neural network in order to minimize the error between the predicted output and the desired output. The weights are adjusted using gradient descent, which calculates the gradient of the error function with respect to the weights and uses this information to update the weights in a way that reduces the error. Backpropagation is an important algorithm in deep learning and is used in a wide variety of applications, including image recognition and natural language processing.
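
To make the mechanics concrete, here is a minimal sketch of backpropagation on a tiny two-layer network in plain NumPy; the XOR dataset, layer sizes and learning rate are illustrative choices, not a recommended setup.

```python
# A minimal backpropagation sketch: a two-layer network learning XOR.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1)          # hidden activations
    out = sigmoid(h @ W2)        # predicted output

    # Backward pass: gradients of the squared error w.r.t. each weight.
    d_out = (out - y) * out * (1 - out)    # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # delta at the hidden layer

    # Gradient descent update: step against the gradient to reduce error.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```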

Backward chaining – Backward chaining is a type of reasoning used in artificial intelligence and expert systems. In backward chaining, an AI system starts with the desired goal or outcome and works backwards, using a set of rules or knowledge to infer the steps that need to be taken to achieve the goal. This is in contrast to forward chaining, where an AI system starts with the available information and works forwards to infer new information or reach a conclusion. Backward chaining can be useful for solving complex problems or making decisions in uncertain environments.

Bias – AI bias refers to the tendency of AI systems to produce biased or unfair results. AI bias can occur when an AI system is trained on data that is biased or unrepresentative of the population, resulting in unfair or discriminatory decisions. AI bias can also occur when an AI system is designed or implemented in a way that favors certain groups or individuals over others. AI bias is a major concern in the development and use of AI systems, and efforts are being made to address it and ensure that AI is fair and unbiased.

Big data – Big data refers to large and complex datasets that are difficult to process using traditional data processing tools. Big data typically includes a high volume of structured and unstructured data, and may be generated by a variety of sources, such as sensors, social media, and internet of things devices. Big data is often used in machine learning and artificial intelligence, as these technologies are well-suited to analyzing and extracting insights from large and complex datasets.

C

Causal AI – This is a new form of AI that tries to identify the underlying causes of behaviour, creating insights that purely predictive models fail to provide. Most AI is good at predicting that X leads to Y; causal AI tries to understand why. Commercially, if we know why choices are made, and can re-create that intelligence, we can reach more accurate predictions and decisions. More on causal AI

Chatbot – A chatbot is a computer program that is designed to simulate conversation with human users, typically over the internet or other digital communication channels. Chatbots are often used to provide customer service, answer frequently asked questions, or assist with online transactions. They can be accessed through a variety of platforms, including websites, messaging apps, and virtual assistants. Additionally, chatbots can be integrated with other business systems, such as e-commerce platforms, to automate tasks and improve efficiency.

Churn rate – Churn rate is a measure of the rate at which customers stop doing business with an organization. In the context of AI and business, the churn rate is the rate at which customers stop using the organization’s AI-powered products or services. The churn rate is typically expressed as a percentage, and it can be calculated by dividing the number of customers who have stopped using the organization’s AI-powered products or services by the total number of customers. A high churn rate can indicate that the organization’s AI-powered products or services are not meeting the needs or expectations of its customers, and it can be a sign that the organization needs to improve its AI-powered offerings in order to retain its customers.
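
As a quick illustration, the calculation described above takes only a few lines of Python; the customer figures below are hypothetical.

```python
# A minimal churn-rate sketch with hypothetical figures.
customers_at_start = 1000   # total customers at the start of the period
customers_lost = 50         # customers who stopped using the product

churn_rate = customers_lost / customers_at_start * 100
print(f"Churn rate: {churn_rate:.1f}%")   # 5.0%
```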

Classification – Classification is a common task in the field of artificial intelligence, and it refers to the process of assigning a label or category to a given input data point. This is typically done by training an AI model on a labeled dataset, where each data point has been assigned a specific class or label. The model can then use this training data to learn the characteristics of each class and make predictions about the class of new, unseen data points.

Classification can help a business in a number of ways. For example, a classification model could be used to analyze customer data and automatically assign each customer to a specific demographic group or segment. This could be useful for targeted marketing campaigns, where different groups of customers could be targeted with customized promotions and offers. Classification could also be used in fraud detection, where the model could learn to identify patterns of fraudulent behavior and automatically flag suspicious transactions. In this way, classification can help businesses to make more informed decisions and improve their overall operations.
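
Here is a minimal sketch of training a classifier, assuming scikit-learn is installed; the bundled iris dataset stands in for the labelled customer data described above.

```python
# A minimal classification sketch: train on labelled data, predict on unseen data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)          # labelled training data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = DecisionTreeClassifier().fit(X_train, y_train)
predictions = model.predict(X_test)         # assign a class to new data points
print(f"Accuracy: {accuracy_score(y_test, predictions):.2f}")
```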

Cloud data warehouse – A cloud data warehouse is a type of database that is hosted on a cloud computing platform and is designed to store and manage large amounts of structured and unstructured data. This data can come from a variety of sources, including transactional systems, social media, sensors, and other sources. A cloud data warehouse is a good option for businesses because it offers the scalability and flexibility of the cloud, allowing the business to easily and cost-effectively store and manage large amounts of data. This can help businesses to gain valuable insights from their data and improve their decision making and operations.

Cognitive computing – Cognitive computing is a type of artificial intelligence that is designed to mimic the problem-solving abilities of the human brain. It involves the use of algorithms, machine learning, and natural language processing to enable computers to understand, learn, and reason as humans do. Cognitive computing systems are often used in applications such as natural language processing, image recognition, and decision making.

Composite AI – This is where different AI techniques or technologies are combined to achieve better results. It can also be called multidisciplinary AI, and it is designed to solve complex business problems. This may mean using smaller data, ML, deep learning, NLP, computer vision (CV), descriptive statistics and knowledge graphs. It could also incorporate DataOps, MLOps, APIOps, data mesh and so on. The idea is that composite AI enables more human-like decision making, thus reducing the need for big teams. More on Composite AI

Computer vision – Computer vision is a field of artificial intelligence that focuses on enabling computers to interpret and understand visual data from the world around them. It involves the development of algorithms and systems that can automatically analyze and interpret visual data, such as images and videos, in order to extract useful information and make decisions or predictions. Computer vision is commonly used in a wide range of applications, including image recognition, object detection, and image segmentation. More about computer vision.

Convolutional neural network (CNN) – A convolutional neural network (CNN) is a type of artificial neural network that is specifically designed for image recognition and processing. CNNs use convolutional layers, which apply a convolution operation to the input data, and pooling layers, which down-sample the data, to extract features from the input image. CNNs are commonly used in applications such as image classification, object detection, and image segmentation.
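
Below is a minimal sketch of the layer structure described above, assuming PyTorch is installed; the layer sizes are illustrative rather than a production architecture.

```python
# A minimal CNN sketch: convolutional layers extract features, pooling
# layers down-sample, and a linear layer classifies the result.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling: down-sample
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One batch of 8 fake 28x28 greyscale images (MNIST-sized input).
logits = SmallCNN()(torch.randn(8, 1, 28, 28))
print(logits.shape)  # torch.Size([8, 10]) -> one score per class
```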

Correlation – In AI, correlation refers to the degree to which two variables are related or depend on each other. Correlation is typically measured using a statistic called the correlation coefficient, which ranges from -1 to 1 and indicates the strength and direction of the relationship between the two variables. Positive correlation indicates that the two variables are positively related, meaning that they tend to increase or decrease together. Negative correlation indicates that the two variables are inversely related, meaning that they tend to move in opposite directions.
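
A short sketch of calculating the correlation coefficient with NumPy; the ad-spend and sales figures are hypothetical.

```python
# A minimal Pearson correlation sketch with hypothetical figures.
import numpy as np

ad_spend = np.array([10, 20, 30, 40, 50])
sales    = np.array([12, 25, 31, 44, 48])

r = np.corrcoef(ad_spend, sales)[0, 1]
print(f"Correlation coefficient: {r:.2f}")   # close to 1: strong positive
```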

Customer Experience (CX) – Customer experience is the overall impression that a customer has of a business, based on their interactions with that business. This can include a wide range of interactions, such as a customer’s experience with a business’s website, its physical stores or locations, its customer service team, and its products or services. A positive customer experience can lead to customer satisfaction and loyalty, while a negative customer experience can lead to customer dissatisfaction and a loss of business. The goal of many businesses is to provide a consistently positive customer experience in order to retain and attract customers.

Custom model – A custom AI model is a machine learning model that is specifically designed and trained to solve a particular problem or perform a specific task. Custom AI models are typically created by developers or data scientists who have expertise in machine learning and artificial intelligence. They are trained using a specific dataset and a set of desired outcomes, and are optimized to perform a specific task or solve a specific problem. Custom AI models can provide better performance and more accurate results than pre-trained models that are not tailored to a specific problem or task.

D

Data analyst – A data analyst is a professional who uses data and analytics to help businesses make better decisions and improve their operations. Data analysts are responsible for collecting, cleaning, and organizing data from a variety of sources, and they use a variety of tools and techniques to analyze this data and uncover insights and patterns. These insights can be used to inform business decisions and strategies, and they can help businesses to gain a competitive advantage. Data analysts work closely with other members of the business, including executives, managers, and other stakeholders, to help them understand and make use of the data and insights that are available to them.

Data architect – A data architect is a professional who is responsible for designing, developing, and managing an organization’s data architecture. Data architects work with data engineers, business analysts, and other stakeholders to define the data requirements of an organization and to develop and implement a data architecture that meets those requirements. They are responsible for ensuring that the organization’s data is organized, accessible, and secure, and that it can be effectively used to support business operations and decision making. Data architects often have a background in computer science, engineering, or a related field.

Data catalogue – A data catalogue is a centralized repository of metadata and other information about the data assets that are available within a business. A data catalogue typically includes details about the data sources, data types, data formats, and data relationships that are relevant to the business. It can also include information about the quality and governance of the data, as well as the business processes and analytics that are associated with the data.

A data catalogue is a valuable resource for businesses because it provides a single, comprehensive view of all the data assets that are available within the organization. This can help to improve data discovery and access, and it can also help to ensure that the data is being used effectively and efficiently. By making data more accessible and transparent, a data catalogue can help businesses to make better decisions and improve their overall performance.

Data-Centric AI – A new area of AI championed by Andrew Ng of Landing AI. This is where AI development focusses on the data rather than only the algorithms. Evidence is showing that good data can lead to 10x faster modelling, a 65% reduction in time to deploy, and a 40% improvement in yield and accuracy. Commercially, better data makes better sense: with good data, companies can train models on less of it, reducing time and costs. More on Data centric AI

Data engineer – A data engineer is a professional who is responsible for designing, building, and maintaining the infrastructure and systems that are used to manage and process data within a business. This can include tasks such as designing and implementing data pipelines, building data warehouses and data lakes, and developing and deploying data management and analytics systems. Data engineers often work closely with data analysts and other members of the business to ensure that the data infrastructure is able to support the data-related needs of the organization.

A data engineer can help a business by ensuring that the data infrastructure is scalable, efficient, and reliable. This can help to improve the performance and accuracy of the data analytics and other data-driven processes within the business. By providing a strong foundation for data management and analytics, a data engineer can help businesses to make better decisions and improve their overall operations.

Deep tech – Deep tech refers to technology that is based on scientific and engineering principles, and that often involves complex systems and a high level of specialization. This can include technologies such as artificial intelligence, biotechnology, and advanced materials. Deep tech is often distinguished from other types of technology, such as consumer-oriented technology, because it tends to be more focused on solving complex problems and creating new scientific and engineering capabilities. In a business context, deep tech can refer to the use of such technologies to develop new products, services, or business models, or to improve existing ones.

Digital Ethics – This is the branch of ethics that deals with digital media. In AI, it is a system of moral principles and techniques intended to inform the development and use of AI.

Data fabric – A data fabric is a term used to describe a data management architecture that is designed to support the flow of data across an enterprise. A data fabric typically consists of a set of technologies and tools that are used to integrate, manage, and analyze data from a variety of sources. This can include data warehouses, data lakes, data pipelines, and other data management and analytics tools.

A data fabric can help a business by providing a single, unified view of all the data assets that are available within the organization. This can help to improve data discovery and access, and it can also enable real-time data analysis and decision making. A data fabric can also help to improve the scalability and flexibility of the data infrastructure, allowing the business to easily and cost-effectively store and manage large amounts of data. In this way, a data fabric can help businesses to gain valuable insights from their data and improve their overall performance.

Data Governance – Data governance is the process of establishing and maintaining policies and procedures for managing, using, and protecting an organization’s data assets. This can include tasks such as defining data ownership, establishing data quality standards, and implementing data security and privacy policies. Data governance is important because it helps to ensure that the data within an organization is accurate, consistent, and secure, and that it is being used in a way that is compliant with relevant regulations and laws.

Data governance can help a business by providing a framework for managing and using data in a responsible and effective manner. This can help to improve the quality and reliability of the data, and it can also help to reduce the risk of data breaches and other security incidents. By establishing and enforcing data governance policies, businesses can improve the trust and confidence of their customers, employees, and other stakeholders in the data and analytics processes within the organization.

Data Ingestion – Data ingestion is the process of transferring data from external sources into a data storage or processing system. In the context of a business, data ingestion typically refers to the process of collecting, cleaning, and organizing data from a variety of sources, such as transactional systems, sensors, social media, and other sources. This data can then be used for a variety of purposes, such as analytics, reporting, and decision making.

Data ingestion is an important part of the data management and analytics process in a business. It is the first step in the process of making data available for analysis and decision making, and it is crucial for ensuring that the data is accurate, complete, and in the right format. By carefully managing the data ingestion process, businesses can improve the quality and reliability of their data, which can in turn improve the accuracy and effectiveness of their analytics and decision making processes.

Decision Intelligence – This is a new area of data science that includes theories from decision theory, social science and managerial science. It was developed to improve commercial decision making in ML.  More on Decision Intelligence

Data labelling and annotation – This is the process of identifying raw data and adding meaningful labels to it. More on data labelling

Deep learning – Deep learning is a type of machine learning that involves the use of deep neural networks to learn from data. It is a powerful tool for solving complex problems and achieving high levels of accuracy in tasks such as image recognition, natural language processing, and predictive analytics. For businesses, deep learning can be a valuable tool for a wide range of applications. For example, deep learning can be used to improve customer service by building chatbots that can understand and respond to customer inquiries in natural language. It can also be used to improve product quality by building systems that can automatically detect and classify defects in manufacturing processes.

Deep learning can also be used to improve business operations and decision making by providing businesses with valuable insights and predictions based on data. For example, deep learning algorithms can be used to predict future demand for products or services, identify trends in customer behavior, or detect fraud or other anomalies in financial transactions.

Overall, deep learning is a powerful tool for businesses, providing them with the ability to solve complex problems, improve their operations, and make better decisions.

Data mesh – Data mesh is a data management architecture that organizes data around business domains and decentralizes data ownership. This can improve data discoverability and access and enable more agile decision making. Data mesh can help enterprises to gain valuable insights from their data and improve their overall performance.

Data mining – Data mining is the process of extracting useful and actionable information from large datasets. It involves the use of algorithms, statistical models, and other techniques to identify patterns, trends, and relationships in data, and to uncover hidden insights and knowledge. Data mining is commonly used in a wide range of applications, including business intelligence, marketing, and scientific research. It is an important component of artificial intelligence and machine learning, and is often used to support decision making and predictive analytics.

Data lake – A data lake is a central repository for storing and managing large and complex datasets. A data lake typically uses a flat architecture, where data is stored in its raw and unstructured format, without the need for a pre-defined schema or data model. This makes it easy to store and access data from a wide range of sources, including structured and unstructured data, and to quickly and easily analyze and process the data using a variety of tools and techniques. Data lakes are commonly used in big data and artificial intelligence applications, and are often integrated with data warehouses, data marts, and other data management systems.

Data pipeline – A data pipeline automates and streamlines the data management and analytics process by moving and transforming data from one system to another. This can reduce manual effort and improve efficiency, helping businesses to gain valuable insights from their data and improve their overall operations.
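
Here is a minimal sketch of a pipeline as three plain Python functions; the file names and column names are hypothetical.

```python
# A minimal data-pipeline sketch: extract, transform, load.
import csv

def extract(path: str) -> list[dict]:
    # Pull raw records out of the source system (a CSV file here).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    # Clean and reshape: drop rows with no amount, convert to numbers.
    return [
        {"order_id": r["order_id"], "amount": float(r["amount"])}
        for r in rows if r.get("amount")
    ]

def load(rows: list[dict], path: str) -> None:
    # Write the cleaned records into the destination system.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["order_id", "amount"])
        writer.writeheader()
        writer.writerows(rows)

# Move and transform data from one system to another, step by step.
load(transform(extract("raw_orders.csv")), "clean_orders.csv")
```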

Data science – Data science is a field that involves using scientific methods, algorithms, and systems to extract knowledge and insights from data. Data scientists use a variety of tools and techniques, including machine learning, statistics, and visualization, to analyze and interpret large and complex datasets. They work with data from a wide range of sources, including sensors, social media, and databases, and use their findings to support business operations, decision making, and scientific research. Data science is a multi-disciplinary field that combines elements of computer science, statistics, and domain expertise.

Data scientist – A data scientist is a professional who uses scientific methods, algorithms, and systems to extract knowledge and insights from data. Data scientists use a variety of tools and techniques, including machine learning, statistics, and visualization, to analyze and interpret large and complex datasets. They work with data from a wide range of sources, including sensors, social media, and databases, and use their findings to support business operations, decision making, and scientific research. Data scientists typically have a strong background in computer science, statistics, or a related field, and often have expertise in a particular domain or industry.

Data stack – A data stack is like a giant puzzle with lots of different pieces that fit together to form a complete and functional system. The pieces of the puzzle include things like data storage, data processing, data visualization, and data analytics, and they all work together to help businesses make sense of their data and use it to improve their operations. Just like a puzzle, a data stack can be tricky to put together, but when it’s done right, it can be a beautiful and powerful thing. So grab your puzzle pieces and get ready to build your own data stack!

Deep Neural Network (DNN) – A deep neural network is a type of artificial neural network that uses multiple layers of interconnected nodes, or neurons, to process and analyze data. Deep neural networks are often used for complex tasks such as image recognition, natural language processing, and predictive analytics. They are able to learn and adapt to new data, and can achieve better performance and more accurate results than shallow neural networks with fewer layers. Deep neural networks are a key component of deep learning, a branch of machine learning that involves the use of multiple layers of neural networks to learn from data.

Deep reinforcement learning – Deep reinforcement learning is a type of AI that combines reinforcement learning with deep learning, a type of machine learning that uses neural networks with many layers of processing. This allows the model to automatically learn complex representations of data and make more accurate and efficient decisions. Deep reinforcement learning is useful for tasks that involve large amounts of data and complex decision making, such as playing games, controlling robots, or optimizing supply chain networks. It can help businesses improve their operations and gain a competitive advantage in the market.

DevOps – DevOps is a software development method that emphasizes collaboration, communication, and integration between software developers and other IT professionals. The goal of DevOps is to enable organizations to rapidly and reliably deliver software updates and improvements, while ensuring that the software remains stable and secure. In a business context, DevOps can help organizations to improve the speed and efficiency of their software development process, reduce the time and cost of software releases, and improve the quality and reliability of their software. By adopting a DevOps approach, businesses can become more agile and responsive to the needs of their customers and the market.

Digital Twins – Digital twins are digital representations of physical objects or systems. They are often used in manufacturing to create virtual models of production systems, processes, or products, which can be used for design, simulation, and optimization.

For businesses, digital twins can be a valuable tool for improving operations, reducing waste, and increasing productivity. For example, a digital twin of a production line could be used to simulate and optimize the production process, identify potential bottlenecks or inefficiencies, and test new equipment or processes before they are implemented.

Digital twins can also be used to monitor the performance of physical systems in real time, providing businesses with valuable data that can be used to identify problems and optimize operations. For example, a digital twin of a production line could monitor equipment performance and alert maintenance teams when equipment needs repair or maintenance.

Overall, digital twins give businesses the ability to simulate and optimize their operations, improve product quality, and increase productivity.

Data wrangling – Data wrangling is like taming a wild and unruly beast. It involves taking a messy and unstructured dataset and turning it into something that is clean, organized, and ready for analysis. It’s a tedious and sometimes frustrating process, but in the end, it can be rewarding and even a little bit fun. So grab your lasso and your spurs, and get ready to wrangle some data!

E

Edge AI – This is where ML algorithms are processed at a local level often by the hardware device rather than centrally in the cloud. More about Edge AI

Echo State Network (ESN) – An echo state network is a type of recurrent neural network that is designed to model the dynamics of complex systems. It uses a reservoir of neurons that are connected in a random, dense, and recurrent manner, and uses a linear readout layer to map the network’s internal state to the desired output.

For businesses, echo state networks can be useful for a variety of purposes, such as predicting future demand for products or services, analyzing customer feedback, or understanding market trends. They can provide valuable insights and help businesses to make better decisions, improve their operations, and stay competitive in the marketplace.

ELT – ELT (Extract, Load, Transform) is a data processing architecture in which data is extracted from source systems, loaded into a central data warehouse or data lake, and then transformed and prepared inside that central platform. This differs from traditional ETL architectures, which transform and prepare the data before loading it into the target system.

ELT can benefit a business by providing a more scalable and flexible approach to data processing. Because the transformation and preparation steps run on the central platform, ELT can handle larger volumes of data and more complex transformation tasks than approaches that transform data before loading. This can help businesses to gain valuable insights from their data more quickly and easily, and it can also improve the performance and reliability of the data processing process.
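
Below is a minimal ELT sketch using Python's built-in sqlite3 module, with an in-memory SQLite database standing in for the central warehouse; the table and column names are hypothetical.

```python
# A minimal ELT sketch: load raw data first, transform it inside the database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_sales (region TEXT, amount TEXT)")

# Extract + Load: raw, untransformed records go straight into the warehouse.
conn.executemany("INSERT INTO raw_sales VALUES (?, ?)",
                 [("north", "100"), ("north", "250"), ("south", "75")])

# Transform: cleaning and aggregation happen inside the central platform.
conn.execute("""
    CREATE TABLE sales_by_region AS
    SELECT region, SUM(CAST(amount AS REAL)) AS total
    FROM raw_sales GROUP BY region
""")
print(conn.execute("SELECT * FROM sales_by_region").fetchall())
# [('north', 350.0), ('south', 75.0)]
```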

Entity annotation – Entity annotation is the process of labeling and identifying entities, or specific objects or concepts, in a dataset. It is an important step in training and evaluating AI and machine learning systems, and involves identifying and labeling named entities in text or visual data. Entity annotation is typically performed by human annotators or automated algorithms.

Entity extraction – Entity extraction is the process of automatically identifying and extracting entities, or specific objects or concepts, from a dataset. In natural language processing, entity extraction involves identifying named entities, such as people, organizations, and locations, in text data. In computer vision, entity extraction involves identifying and extracting objects, scenes, and other visual entities from images or videos. Entity extraction is an important step in the process of training and evaluating artificial intelligence and machine learning systems, and can be performed using a variety of algorithms and techniques.
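
Here is a minimal sketch using the spaCy library, assuming it and its small English model are installed (pip install spacy, then python -m spacy download en_core_web_sm).

```python
# A minimal named-entity extraction sketch with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Tim Cook announced Apple's new campus in Austin last week.")

for ent in doc.ents:
    print(ent.text, "->", ent.label_)   # e.g. "Tim Cook -> PERSON"
```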

Ethics – AI ethics refers to the ethical principles and guidelines that should be followed when designing, developing, and using AI systems. AI ethics covers a wide range of topics, including fairness, transparency, accountability, privacy, and the potential impacts of AI on society and individuals. The goal of AI ethics is to ensure that AI is developed and used in a way that is fair, transparent, and responsible, and that considers the potential consequences of AI on society and individuals.

F

Feature engineering – Feature engineering is the process of selecting and creating the input features that are used to train an AI model. This process involves selecting relevant and informative features from the raw data and transforming them into a form that is suitable for the model to use. Feature engineering can benefit a business by improving the performance of the AI model and making it more effective at solving a specific problem. By carefully selecting and engineering the features that are used to train the model, businesses can improve the accuracy and reliability of the model’s predictions, leading to better decision making and improved business outcomes.
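
A minimal sketch with pandas, deriving a few model-ready features from raw transactions; the column names and figures are hypothetical.

```python
# A minimal feature-engineering sketch: raw transactions -> per-customer features.
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "order_date": pd.to_datetime(
        ["2023-01-02", "2023-03-15", "2023-01-20", "2023-02-01", "2023-02-28"]),
    "amount": [120.0, 80.0, 35.0, 60.0, 45.0],
})

features = raw.groupby("customer_id").agg(
    order_count=("amount", "size"),        # how often they buy
    avg_order_value=("amount", "mean"),    # how much they spend
    last_order=("order_date", "max"),      # recency of last purchase
)
features["days_since_last"] = (pd.Timestamp("2023-04-01")
                               - features["last_order"]).dt.days
print(features)
```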

False negatives – False negatives are instances where a model predicts that an event will not occur, but the event actually does occur. In a binary classification model, false negatives are instances where the model predicts that an example belongs to the negative class, but the example actually belongs to the positive class. False negatives are a type of error, and can have serious consequences, particularly in applications such as medical diagnosis or fraud detection. Reducing the number of false negatives is an important goal in the development and evaluation of machine learning models.

F score – The F score, also known as the F1 score or F-measure, is a measure of a model’s accuracy in binary classification. It is calculated as the harmonic mean of the model’s precision and recall, and so takes into account true positives, false positives, and false negatives (true negatives do not enter the calculation). The F score is often used to evaluate the performance of a machine learning model, and is commonly used in applications such as spam detection and fraud detection. A high F score indicates that the model has good precision and recall, and is making accurate predictions.
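
The calculation itself is straightforward; here is a minimal sketch with hypothetical confusion-matrix counts for a spam classifier.

```python
# A minimal F1 sketch from hypothetical confusion-matrix counts.
tp, fp, fn = 90, 10, 30   # true positives, false positives, false negatives

precision = tp / (tp + fp)                          # 0.90
recall    = tp / (tp + fn)                          # 0.75
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(f"F1 score: {f1:.3f}")                        # ~0.818
```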

Foundation models – These are large AI models trained on vast amounts of unlabelled data. More info about Foundation models

Frameworks – AI frameworks are software platforms that provide a set of tools and libraries for building, training, and deploying AI models. AI frameworks make it easier for developers to create and deploy AI systems by providing a set of standard components and tools that can be used to build and train AI models. Some common AI frameworks include TensorFlow, PyTorch, and Keras. AI frameworks can be used for a wide variety of applications, including computer vision, natural language processing, and predictive analytics.

G

Geoffrey Hinton – Geoffrey Hinton is a British computer scientist and cognitive psychologist who is known for his work on artificial intelligence and machine learning, particularly in the area of deep learning. He is a professor at the University of Toronto and has also worked on the Google Brain team. Hinton has made significant contributions to the field of AI, including the development of the backpropagation algorithm, which is a key technique used in training artificial neural networks. He is also the co-founder of the Vector Institute for Artificial Intelligence, a research institute focused on advancing the field of AI. Hinton has received numerous awards and honors for his work, including the Turing Award, which is often described as the “Nobel Prize” of computer science.

Generative adversarial network (GAN) – A generative adversarial network (GAN) is a type of machine learning model that involves two neural networks, a generator and a discriminator, working together to generate new data that is similar to a training dataset. The generator network generates synthetic data, while the discriminator network attempts to distinguish the synthetic data from the real data. The two networks are trained together, with the generator trying to fool the discriminator, and the discriminator trying to correctly identify the synthetic data.

For businesses, GANs can be useful for a variety of purposes, such as generating synthetic images or text, improving the quality of image or video datasets, or creating new and unique product designs. GANs can provide businesses with valuable insights and creative solutions that can help to improve their operations and stay competitive in the marketplace.
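
Here is a heavily simplified training-loop sketch in PyTorch, where the generator learns to mimic samples from a one-dimensional Gaussian; the network sizes, learning rates and step counts are illustrative only.

```python
# A minimal GAN sketch: generator and discriminator trained adversarially.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 3      # "real" data: samples from N(3, 2)
    fake = G(torch.randn(64, 8))           # synthetic data from random noise

    # Discriminator: learn to label real as 1 and fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator into outputting 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The synthetic data should drift towards mean 3 and std 2.
print(fake.mean().item(), fake.std().item())
```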

General AI – General AI, also known as strong AI or human-level AI, is a type of artificial intelligence that has the ability to perform any intellectual task that a human being can. It is characterized by its ability to adapt to new tasks and environments, and to reason and make decisions based on incomplete or uncertain information.

For businesses, general AI could be useful for a wide range of applications, such as decision making, problem solving, and creative tasks. It could help businesses to improve their operations, make better decisions, and stay competitive in the marketplace. Additionally, general AI could also open up new business opportunities and create new markets, by enabling businesses to solve previously intractable problems and create new products and services.

Generative AI – A broad label for AI that uses unsupervised learning algorithms to create images, video, audio, text or code. More on Generative AI

Gradient descent – Gradient descent is an optimization algorithm used to train AI models by finding the minimum value of a cost function. It does this by iteratively moving in the direction of steepest descent, calculating the gradient at each point and taking a step in the opposite direction. This process is repeated until the minimum value is found or a preset number of iterations is reached. Gradient descent is commonly used in neural network models and can help to improve the performance and accuracy of the model.
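
Here is a minimal sketch on a one-dimensional cost function f(w) = (w - 3)^2, whose gradient is 2(w - 3), so the minimum the algorithm should find is at w = 3.

```python
# A minimal gradient-descent sketch on f(w) = (w - 3)^2.
w = 0.0            # starting point
lr = 0.1           # learning rate (step size)

for step in range(50):
    grad = 2 * (w - 3)   # gradient of the cost at the current point
    w -= lr * grad       # step in the direction of steepest descent

print(round(w, 4))       # converges towards the minimum at w = 3
```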

H

Hyperparameter – A hyperparameter is a parameter that is set before training a machine learning model, and that cannot be learned from the data. Hyperparameters are used to control the behavior of the model, and can have a significant impact on the model’s performance and accuracy. Examples of hyperparameters include the learning rate, the regularization coefficient, and the number of layers or nodes in a neural network. Hyperparameters are often set through a process called hyperparameter optimization, which involves choosing the values of the hyperparameters that give the best performance on a validation dataset.
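
Here is a minimal hyperparameter-search sketch using scikit-learn's GridSearchCV, assuming the library is installed; the grid values are illustrative.

```python
# A minimal hyperparameter-optimization sketch: try several values,
# keep the combination that scores best on held-out validation folds.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},  # hyperparameters
    cv=5,                                                     # 5-fold validation
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```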

I

ImageNet – ImageNet is a large dataset of labeled images that is commonly used to train machine learning algorithms for image recognition and classification. It is useful for businesses because it provides a wide range of pre-labeled images that can be used to train algorithms for tasks such as visual search, product identification, and behavior analysis. This can help businesses improve the accuracy and efficiency of their image recognition systems. More on ImageNet here

Image recognition – Image recognition is the ability of a machine or software to identify and classify objects, people, scenes, and actions in images. It is good for business because it can be used for tasks such as analyzing customer behavior, automating product categorization, and improving the accuracy of visual searches.

Image segmentation – Image segmentation is the process of dividing an image into multiple segments or regions using algorithms. This helps businesses by allowing them to automatically analyze and understand the content of images, such as identifying specific objects or analyzing the behavior of people in a scene. This can be used for tasks such as product identification, visual search, and behavior analysis.

Intelligent Applications (apps) – These are AI enabled apps that provide rich, adaptive and personalised user experience.

J

John McCarthy – John McCarthy was an American computer scientist and cognitive scientist who is known for his contributions to the field of artificial intelligence (AI). He was born in 1927 in Boston, Massachusetts, and he earned his PhD in mathematics from Princeton University in 1951. McCarthy is credited with coining the term “artificial intelligence” in 1955, and he is considered one of the founders of the field of AI. He was a pioneer in the development of AI algorithms and systems, and he is known for his work on the Lisp programming language, which is still widely used in AI research. McCarthy was a professor at Stanford University, where he taught and conducted research on AI until his death in 2011.

Junction tree algorithm – The junction tree algorithm is a method for performing inference in probabilistic graphical models, which represent and reason about the relationships among variables in a system. It is good for businesses because it allows them to efficiently and accurately analyze complex data by organizing it into a tree-like structure and making inferences based on probabilistic dependencies. This can be used for tasks such as fraud detection, risk analysis, and decision making. It is also known as the clique tree algorithm, and it is closely related to belief propagation.

K

Knowledge graphs (KGs) – Knowledge graphs access or integrate data sources, adding context and data-driven depth using ML. They act as a bridge between humans and systems. More on knowledge graphs

L

M

MATLAB – MATLAB is a high-level programming language and technical computing environment for numerical computation and visualization. It is commonly used in engineering, science, and mathematics to analyze and design systems and to develop algorithms. MATLAB provides a wide range of tools and functions for working with data, including tools for signal processing, image and video processing, and machine learning. It is used by millions of users worldwide in academia, industry, and government.

Machine intelligence –  Machine intelligence refers to the ability of a machine or computer program to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision making. This is achieved through the use of algorithms and techniques such as machine learning, which allow machines to improve their performance on these tasks over time.

Machine translation – AI machine translation is a type of technology that uses machine learning algorithms to automatically translate text from one language to another. This is useful because it allows businesses to quickly and accurately translate large volumes of text, such as customer support inquiries or product descriptions, without the need for human translators. This can improve the efficiency and accuracy of language-related tasks and make it easier for businesses to operate in a global market.

Marvin Minsky – Marvin Minsky was an American cognitive scientist and computer scientist who was a pioneer in the field of artificial intelligence (AI). He was born in 1927 in New York City, and he earned his PhD in mathematics from Princeton University in 1954. Minsky is known for his work on artificial neural networks, which are a type of machine learning algorithm inspired by the structure of the human brain, including SNARC, one of the earliest neural-network learning machines. He also co-founded the MIT AI Laboratory. Minsky was a professor at the Massachusetts Institute of Technology (MIT), where he taught and conducted research on AI until his death in 2016.

MLOPS – MLOps, or machine learning operations, is the practice of applying DevOps principles and techniques to the development, deployment, and maintenance of machine learning (ML) systems. This can help businesses by allowing them to automate and streamline the ML development process, which can improve the speed, quality, and reliability of ML models. MLOps can also enable businesses to monitor and manage their ML systems in production, which can help them identify and address problems quickly and ensure that their ML systems continue to perform well over time.

Model – An AI model is a mathematical representation of a problem or task that is used by an AI system to make predictions or decisions. These models are typically trained on a dataset that contains examples of the problem or task, and they use this training data to learn the underlying patterns and relationships in the data. Once the model has been trained, it can be used to make predictions or decisions on new, unseen data.

AI models can benefit a business in a number of ways. For example, an AI model could be used to analyze customer data and make personalized recommendations for products or services. This could improve the customer experience and lead to increased sales. AI models could also be used for fraud detection or predictive maintenance, allowing businesses to automate these tasks and improve their operations. In general, AI models can help businesses to make better decisions and improve their overall performance.

N

Natural language generation (NLG) – Natural language generation is a type of natural language processing (NLP) technology that uses AI algorithms to automatically generate written or spoken language. This can be beneficial to businesses because it allows them to automatically produce large amounts of high-quality text, such as reports, summaries, or responses to customer inquiries. Natural language generation can help businesses improve the efficiency and accuracy of language-related tasks, such as data analysis and customer support, and make it easier for them to operate in a global market. It can also be used to create personalized and engaging content for customers, which can improve the customer experience and drive sales.

Neuromorphic Computing – This is a form of computing that is modelled after the human brain and its nervous system.

Natural Language Processing (NLP) – This is an area of linguistics, computer science and AI concerned with the interactions between computers and human languages. Most NLP systems allow computers to understand text and spoken words in much the same way humans do. More about NLP

Neural network (NN) – A neural network is a type of machine learning algorithm that is modeled after the structure and function of the human brain. It is composed of multiple layers of interconnected nodes, which process and transmit information in a way that is similar to how neurons in the brain communicate. Neural networks can help businesses by allowing them to automate complex processes and make predictions based on large amounts of data. For example, neural networks can be used for tasks such as image and speech recognition, natural language processing, and fraud detection, which can improve the accuracy and efficiency of these tasks and enable businesses to gain insights from their data.

O

Operational AI System – This is a type of intelligent system designed for specific real-world commercial applications such as text analytics or image recognition. It is often a miniature, domain-specific AI system. More on Operational AI systems

Outlier – An AI outlier is a data point that is significantly different from the other data points in a dataset. In the context of AI and machine learning, these data points can be problematic because they can negatively impact the performance of an AI model. To avoid this problem, it is often necessary to identify and remove outliers from the training dataset before training an AI model.
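
A minimal sketch of z-score outlier filtering with NumPy. A threshold of 3 standard deviations is a common default; 2 is used here only because the sample is tiny and the outlier itself inflates the standard deviation.

```python
# A minimal outlier-removal sketch: drop points far from the mean.
import numpy as np

data = np.array([10.1, 9.8, 10.3, 9.9, 10.0, 42.0])   # 42.0 is the outlier
z = np.abs((data - data.mean()) / data.std())          # z-score of each point

cleaned = data[z < 2]
print(cleaned)   # the extreme point is removed before model training
```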

Overfitting – Overfitting is a problem that can occur in machine learning when a model is trained on a limited amount of data and becomes too complex and specialized to the training data. This can lead to poor performance on new, unseen data, and can result in the model making inaccurate predictions or decisions. Overfitting can be avoided by using regularization techniques, such as limiting the complexity of the model or using cross-validation to assess its performance on multiple subsets of the training data.

P

Pattern recognition – Pattern recognition is the ability of a machine or software to identify and classify patterns or regularities in data. It is a fundamental task in machine learning and AI, and is often used for tasks such as image and speech recognition, natural language processing, and anomaly detection. In a business setting, pattern recognition can be used to automate processes and make more accurate and efficient decisions. For example, it can be used to identify patterns in customer behavior or market trends, which can help businesses improve their products and services and gain a competitive advantage. Additionally, pattern recognition can be used to detect fraud and other anomalies in data, which can help businesses reduce risks and improve their operations.

Platform as a Service (PaaS) – PaaS is a type of cloud computing service that provides a platform for the development, deployment, and management of applications and services. PaaS typically includes a range of tools and services, such as development frameworks, databases, and middleware, that developers can use to build and deploy their applications without having to manage the underlying infrastructure. In a business context, PaaS can provide a cost-effective and scalable way for organizations to develop and deploy new applications and services, without the need for significant upfront investment in hardware and software. By using PaaS, businesses can focus on developing their applications and services, while leaving the management and maintenance of the underlying platform to the PaaS provider.

Predictive analytics – Predictive analytics is the use of data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data. It is a type of analytics that is often used to make predictions about customer behavior, market trends, and other business-related phenomena. Predictive analytics can help businesses by providing them with insights about what is likely to happen in the future, which can inform their decision-making and strategy. For example, predictive analytics can be used to identify likely customer segments or predict the demand for a product, which can help businesses improve their marketing and operations. Additionally, predictive analytics can be used to identify potential risks and opportunities, which can help businesses stay ahead of their competitors and make more informed decisions.

Prescriptive analytics – Prescriptive analytics is a type of analytics that uses AI algorithms and other advanced techniques to automatically generate recommendations or actions based on data and models. This is different from descriptive or predictive analytics, which focus on summarizing and predicting data, respectively. Prescriptive analytics can help businesses by providing them with specific, actionable recommendations that can improve their operations and decision making. For example, prescriptive analytics can be used to automatically generate marketing campaigns, supply chain plans, or risk management strategies, which can help businesses increase efficiency and reduce costs. Additionally, prescriptive analytics can be used to optimize decision-making processes, such as by providing decision makers with multiple options and their associated risks and rewards.

Predictive modeling – Predictive modeling is the process of using AI and machine learning algorithms to build a model that can make predictions about future events or outcomes. This is typically done by training the model on a dataset that contains historical data, and the model uses this data to learn the underlying patterns and relationships in the data. Once the model has been trained, it can be used to make predictions about future events based on new, unseen data. Predictive modeling can help an enterprise by allowing it to make more informed decisions and improve its overall operations.

Physics-informed AI (ML) – This is the exploration of integrating data-driven learning with mathematical physics models. The idea is that neural networks (NNs) become physics-informed neural networks (PINNs), with known physical laws constraining what the model learns from data. This is an exciting prospect, where different forms of knowledge are combined for more accurate predictions or decisions. More on Physics-informed AI

Q

Quantum computing – Quantum computing is a type of computing that is based on the principles of quantum mechanics and uses quantum bits, or qubits, which can exist in superpositions of states, allowing certain kinds of calculations to be performed far faster than on classical computers. It has the potential to be good for businesses because it can enable the solution of complex problems that are currently intractable using classical computers, such as optimizing supply chain networks, analyzing financial data, and simulating complex systems. This can help businesses improve their operations and decision making, and gain a competitive advantage in the market. Additionally, quantum technologies such as quantum key distribution may enhance the security of business systems, although large-scale quantum computers would also threaten much of today's encryption.
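
Real quantum hardware is beyond a short snippet, but the core idea of superposition can be sketched with a toy NumPy simulation of a single qubit.

```python
# A toy illustration (not a real quantum computer): simulating one qubit
# with NumPy. The Hadamard gate puts the qubit into an equal superposition.
import numpy as np

ket_zero = np.array([1.0, 0.0])             # the |0> state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ ket_zero                 # superposition of |0> and |1>
probabilities = np.abs(state) ** 2
print(probabilities)                        # [0.5 0.5]: either outcome is equally likely
```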

R

Responsible AI – This is the framework that documents how an organisation is addressing the challenges around AI from an ethical and legal point of view. Most companies should be addressing these important issues.

Recurrent neural network (RNN) – A recurrent neural network (RNN) is a type of neural network that is designed to process sequential data, such as time series, natural language, or speech. It is composed of multiple interconnected nodes that form a loop, allowing the network to retain information from previous time steps and use it to make predictions about future events. RNNs can help businesses by allowing them to automate and improve the accuracy of tasks that involve sequential data, such as language translation, speech recognition, and time series forecasting. This can help businesses improve their operations and gain insights from their data.
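
A minimal, hedged sketch of an RNN in Keras (assuming TensorFlow is installed): the network loops over a toy numeric sequence and learns to predict the next value.

```python
# A small recurrent network learning to continue a numeric sequence.
import numpy as np
import tensorflow as tf

# Toy sequences: windows of an increasing series; target is the next value.
X = np.array([[[0.1], [0.2], [0.3]], [[0.2], [0.3], [0.4]]])  # (batch, time, features)
y = np.array([0.4, 0.5])

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(8, input_shape=(3, 1)),  # loops over the 3 time steps
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)

print(model.predict(np.array([[[0.3], [0.4], [0.5]]]), verbose=0))
```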

Recommendation engine – A recommendation engine (sometimes known as a recommendation system or a recommender system) is a type of AI system that uses algorithms to automatically generate personalized recommendations for users based on their interests and behavior. Recommendation engines are commonly used by businesses to improve the customer experience and drive sales. For example, a recommendation engine can be used to suggest products that a customer may be interested in based on their previous purchases or browsing history, which can help businesses increase customer engagement and loyalty. Additionally, recommendation engines can be used to automatically generate personalized content, such as email newsletters or social media posts, which can help businesses improve their marketing and reach a wider audience.
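
As a rough illustration of the underlying idea, here is a hedged item-based recommendation sketch in Python using cosine similarity over an invented ratings matrix.

```python
# Item-based recommendation: items whose rating patterns are most similar
# to what the user already likes. The ratings below are invented.
import numpy as np

# Rows = users, columns = items; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# How similar is item 0 to every other item, judged by user behaviour?
similarities = [cosine(ratings[:, 0], ratings[:, j]) for j in range(ratings.shape[1])]
print(similarities)  # item 1 scores highest of the others: recommend it to fans of item 0
```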

Reinforcement learning – Reinforcement learning is a type of machine learning that involves training a model to make decisions in an environment in order to maximize a reward or objective. It is useful for businesses because it allows them to automatically train models to perform complex tasks, such as controlling robots or playing games, without the need for explicit instructions. Reinforcement learning can help businesses improve the accuracy and efficiency of their operations, and can be used for tasks such as optimizing supply chain networks, scheduling production, and automating decision making. It can also be used to create AI systems that can learn and adapt to changing environments, which can help businesses stay ahead of their competitors and respond to market challenges.
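
A compact, hedged Q-learning sketch in plain Python on a made-up four-state corridor: the agent is rewarded only for reaching the final state, and learns by trial and error which way to move.

```python
# Tabular Q-learning on a tiny corridor: states 0..3, reward at state 3.
import random

n_states, actions = 4, [0, 1]          # action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # Mostly exploit the best known action, occasionally explore.
        a = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda a: Q[s][a])
        s_next = max(0, s - 1) if a == 0 else s + 1
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # The core update: nudge Q towards reward + discounted future value.
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([max(actions, key=lambda a: Q[s][a]) for s in range(n_states - 1)])  # expected [1, 1, 1]: "go right"
```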

Robotic process automation (RPA) – Robotic process automation (RPA) is the use of software robots, or bots, to automate routine and repetitive tasks. RPA is good for businesses because it allows them to reduce the workload of their employees and improve the efficiency and accuracy of their operations. For example, RPA can be used to automate data entry, customer support, and other tasks that are currently performed manually, which can reduce the need for human labor and improve the speed and quality of these tasks. Additionally, RPA can be integrated with other business systems, such as CRM or ERP systems, to automate more complex processes and improve the overall efficiency of the business.

S

Sentiment analysis – Sentiment analysis is the use of natural language processing (NLP) techniques to automatically analyze the sentiment or emotion expressed in text. It is commonly used to analyze customer feedback or social media posts, in order to understand how customers feel about a product, service, or brand. Sentiment analysis can help businesses by providing them with insights into their customers’ opinions and emotions, which can inform their decision-making and strategy. For example, sentiment analysis can be used to identify customer satisfaction trends or detect negative sentiments, which can help businesses improve their products and services and address customer concerns. Additionally, sentiment analysis can be used to track and respond to customer feedback in real time, which can improve the customer experience and drive sales.
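
A minimal sentiment-analysis sketch, assuming scikit-learn and using a handful of invented example reviews: a bag-of-words model learns to separate positive from negative text.

```python
# Bag-of-words sentiment classification on invented reviews.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["great product, love it", "terrible service, very slow",
           "excellent support and fast delivery", "awful experience, never again"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["fast and excellent"]))  # expected: ['positive']
```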

State Of The Art (SOTA) – This refers to the best performance currently achieved on a given task or dataset, typically by the most recent and sophisticated models and algorithms. SOTA may also refer to the overall level of progress and advancement in the field.

Smart robots – These are AI systems that build up their knowledge by learning from their environment and experience.

Strong AI (AGI) – Strong AI, also known as general AI, is a type of artificial intelligence that is capable of learning and solving any problem that a human being can solve. Strong AI is not limited to a specific task or problem, and is capable of adapting to new situations and tasks. It is a theoretical concept that has not yet been achieved, but is the ultimate goal of many AI researchers. In contrast, weak AI, also known as narrow AI, is designed to perform a specific task or solve a specific problem, and is not capable of general intelligence or learning new tasks on its own.

Structured data – Structured data is data that is organized in a well-defined and predictable format, such as tables or databases. It is commonly found in business systems, such as CRM or ERP systems, and can be extracted and processed using software tools and algorithms. Structured data can come from a variety of sources, such as customer transactions, inventory records, or employee records. It can be used to support a wide range of business activities, such as data analysis, machine learning, and decision making. Because of its predictable format, structured data is relatively straightforward to query and analyze, whereas unstructured data requires techniques such as natural language processing to extract the valuable insights it may contain.

Supervised learning – Supervised learning is a type of machine learning in which a model is trained on a labeled dataset, where the correct outputs or labels are provided for each example in the dataset. The model uses this information to learn a function that maps the input data to the corresponding outputs, and can then be used to make predictions on new, unseen data. Supervised learning is beneficial for businesses because it allows them to train models to perform tasks such as classification, regression, or forecasting, which can improve the accuracy and efficiency of these tasks and enable businesses to gain insights from their data. Additionally, supervised learning can be used to automate complex processes, such as fraud detection or customer support, which can reduce the need for human labor and improve the overall performance of the business.
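
A small supervised-learning sketch, assuming scikit-learn and using invented loan data: every training example carries a label, and the model learns to map features to those labels.

```python
# Supervised learning: labeled examples in, a predictive mapping out.
from sklearn.tree import DecisionTreeClassifier

# Invented loan data: [income_k, existing_debts] -> approved (1) or not (0).
X = [[20, 3], [25, 4], [60, 0], [80, 1], [30, 2], [90, 0]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[70, 1]]))  # expected: [1]
```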

Synthetic data – This is data/information that is produced artificially rather than collected from an original, real-world source. More about Synthetic data
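
As a hedged illustration of one simple approach, the sketch below (using NumPy, with stand-in "real" data) fits a distribution to real measurements and then samples artificial records from it.

```python
# One simple way to produce synthetic data: fit a distribution to real
# measurements, then sample artificial records that contain no real individuals.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=[170, 70], scale=[10, 12], size=(500, 2))  # stand-in "real" data

mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=500)  # same statistics, new records

print(synthetic[:3])
```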

T

TensorFlow – TensorFlow is an open-source software library for machine learning and AI. It provides a flexible and powerful platform for training, deploying, and managing machine learning models. TensorFlow is widely used by researchers, businesses, and other organizations for a variety of tasks, including image and speech recognition, natural language processing, and predictive modeling. It is known for its ease of use, performance, and scalability, and has become a popular choice for developing and deploying machine learning and AI systems.
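
A minimal, hedged TensorFlow/Keras example on random stand-in data, showing the define, compile, fit, and predict workflow.

```python
# A tiny Keras model on random stand-in data (illustrative only).
import numpy as np
import tensorflow as tf

X = np.random.rand(100, 4)                 # 100 examples, 4 features
y = (X.sum(axis=1) > 2).astype(int)        # toy binary target

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

print(model.predict(X[:2], verbose=0))     # predicted probabilities for two examples
```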

Training data – Training data is a dataset that is used to train a machine learning model. It consists of input data, which is the data that is used to make predictions, and output data, which is the corresponding correct labels or outputs. The model uses this information to learn a function that maps the input data to the correct outputs, and can then be used to make predictions on new, unseen data. Training data is a crucial component of machine learning, as the quality and quantity of the data can significantly impact the performance of the model.

The Turing test – The Turing test is a test of a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human. It is named after the mathematician and computer scientist Alan Turing, who proposed it as a way to determine whether a machine has the ability to think and act like a human. The test involves a human evaluator interacting with a machine and a human subject, without knowing which is which. If the evaluator cannot tell the machine and the human apart based on their responses, the machine is said to have passed the Turing test. The Turing test is a controversial concept, as some argue that it is not a sufficient test of intelligence, and others argue that it is impossible to achieve.

U

Unstructured data – Unstructured data is data that is not organized in a well-defined and predictable format, such as tables or databases. It may take the form of text, images, audio, or video, and may not have a clear structure or schema. Unstructured data is commonly found in business systems, such as social media, customer feedback, or email, and requires more sophisticated techniques, such as natural language processing or image recognition, to extract useful information. Unstructured data can provide valuable insights into a business and its customers, and can be used for tasks such as sentiment analysis, market research, or customer segmentation.

Unsupervised learning – Unsupervised learning is a type of machine learning in which a model is trained on an unlabeled dataset, where the correct outputs or labels are not provided. Instead, the model uses the input data to learn the underlying structure and patterns of the data, without the guidance of correct labels. Unsupervised learning can help enterprises by allowing them to discover hidden patterns and trends in their data, which can provide valuable insights and inform their decision-making and strategy. For example, unsupervised learning can be used to identify customer segments, detect anomalies, or generate recommendations, which can help businesses improve their operations and gain a competitive advantage in the market. Additionally, unsupervised learning can be used to pre-train models for downstream tasks, such as supervised learning, which can improve the performance of these tasks.
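
An unsupervised-learning sketch, assuming scikit-learn: no labels are given, and k-means discovers the two obvious groups in the toy data on its own.

```python
# Unsupervised learning: k-means finds group structure without labels.
from sklearn.cluster import KMeans

X = [[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]]

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)   # e.g. [0 0 0 1 1 1]: two discovered segments
```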

V

Validation data – Validation data is a term used in the field of artificial intelligence to refer to a dataset that is used to evaluate the performance of an AI model. This data is used to verify that the model is able to accurately make predictions or decisions based on the data that it is given. Validation data is typically a subset of the training data that is held back from the model during training and only used for validation purposes. The validation data is used to tune the model’s hyperparameters and ensure that it is not overfitting to the training data.
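
A hedged sketch of the idea, assuming scikit-learn: hold back a validation set, then use it to choose a hyperparameter (here, tree depth) without touching the training score.

```python
# Hold back a validation set and use it to tune a hyperparameter honestly.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

for depth in (2, 5, 10):
    model = DecisionTreeClassifier(max_depth=depth).fit(X_train, y_train)
    # Validation accuracy guides the choice of depth.
    print(depth, model.score(X_val, y_val))
```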

Variance – Variance is a term used in the field of artificial intelligence to describe how sensitive a model is to the particular data it was trained on, observed in practice as a difference in performance across different datasets. High variance in an AI model can be a sign of overfitting, where the model has memorized the specific details of the training data and is not able to generalize well to new data. This can lead to poor performance on unseen data and can make the model less useful in real-world applications. To reduce variance in an AI model, it is often necessary to use regularization techniques, such as weight decay, or to use a larger and more diverse training dataset.
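
A toy demonstration of high variance using NumPy polynomial fits: the very flexible model fits the noisy training points almost perfectly but typically generalizes worse than the simpler one.

```python
# High variance in miniature: a flexible polynomial chases the noise in the
# training points and usually does worse on clean test data.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    test_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(test_error, 4))    # the degree-9 fit usually has higher test error
```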

W

Weak AI – Weak AI, also known as narrow AI or applied AI, refers to artificial intelligence systems that are designed to perform a specific task or a narrow set of tasks. These systems are able to simulate human-like intelligence for a particular task, but they are not capable of general intelligence or the ability to perform any intellectual task that a human being can do.

Weak AI can benefit a business by allowing it to automate specific tasks and processes, improving efficiency and reducing the need for human labor. For example, a weak AI system could be used in a customer service application to answer common questions and provide basic information, freeing up human customer service agents to handle more complex inquiries. This can improve the overall customer experience and reduce the workload for human employees. Other potential applications for weak AI in business include data analysis, fraud detection, and predictive maintenance.

X

Y

Yann LeCun – Yann LeCun is a French computer scientist and electrical engineer who is known for his work on artificial intelligence, particularly in the areas of computer vision and deep learning. He is currently the Chief AI Scientist at Meta (formerly Facebook) and the Silver Professor of Computer Science, Data Science, Neural Science, and Electrical and Computer Engineering at New York University. LeCun has made significant contributions to the field of AI, including the development of the convolutional neural network, a type of machine learning algorithm that has been widely used in image and speech recognition. He has also been a leading advocate for the use of AI to address some of the world’s most challenging problems, including climate change and the need for sustainable development.

Yoshua Bengio – Yoshua Bengio is a Canadian computer scientist and professor at the University of Montreal. He is known for his research on artificial intelligence and machine learning, particularly in the areas of deep learning and artificial neural networks. Bengio is considered one of the pioneers of deep learning, and his work has had a major impact on the field of AI. He has received numerous awards and honors for his contributions to AI, including the Turing Award, which is considered the “Nobel Prize” of computer science. In addition to his research, Bengio is also the co-founder of several AI-focused companies and organizations, including Element AI and the Montreal Institute for Learning Algorithms (MILA).

Z