
AI for Research

AI terminology

Common AI Terminology

Algorithm

An algorithm is a set of rules or instructions designed to solve a specific problem or perform a specific task in a finite number of steps. In the context of AI, algorithms play a crucial role in training machine learning models to learn patterns from data and generate accurate outputs.
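For illustration, here is a small example of an algorithm written in Python: binary search, which locates a value in a sorted list in a finite number of steps by repeatedly halving the search range. The example is illustrative only.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:                      # the loop runs a finite number of times
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid                      # found the target
        elif sorted_items[mid] < target:
            low = mid + 1                   # discard the lower half
        else:
            high = mid - 1                  # discard the upper half
    return -1                               # target is not in the list

print(binary_search([2, 5, 8, 12, 23], 12))  # prints 3
```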

 

Artificial intelligence (AI)

AI is a field of computer science research focused on creating intelligent machines and tools that can perform tasks that typically require human intelligence.

 

Chatbot

A chatbot is an AI-powered tool designed to simulate human-like conversations with users via text or speech interfaces. Chatbots employ natural language processing and machine learning algorithms to understand, interpret, and respond to users. Examples of popular chatbots include Siri (Apple), Alexa (Amazon), and Google Assistant.

 

Completions

Completions are the output produced by AI in response to a given input or prompt. When a user inputs a prompt, the AI model processes it and generates text that logically follows or completes the given input. These completions are based on the patterns, structures, and information the model has learned during its training phase on vast datasets.
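As a rough sketch of how completions are produced in practice, the open-source Hugging Face transformers library can generate text that continues a prompt. The model (GPT-2) and the settings below are illustrative choices only.

```python
# A minimal sketch of generating a completion, assuming the Hugging Face
# "transformers" library (and a backend such as PyTorch) is installed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In academic research, artificial intelligence can be used to"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The completion continues the prompt based on patterns learned during training.
print(result[0]["generated_text"])
```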

 

Deep Learning

A subfield of machine learning where neural networks with multiple layers (deep networks) learn from large amounts of data to perform complex tasks such as image classification, speech recognition, and natural language processing. Deep learning models can automatically discover and learn hierarchical representations from raw input data, improving accuracy and efficiency in various AI applications.
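As an illustrative sketch (assuming the PyTorch library is installed), a "deep" network is simply a network with several stacked layers; the layer sizes below are arbitrary.

```python
# A minimal sketch of a deep (multi-layer) neural network in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),  # input layer, e.g. a flattened 28x28 pixel image
    nn.ReLU(),
    nn.Linear(256, 64),   # hidden layer learns intermediate representations
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer, e.g. scores for 10 possible classes
)

fake_image = torch.rand(1, 784)   # one random "image" as input
scores = model(fake_image)        # a forward pass through all the layers
print(scores.shape)               # torch.Size([1, 10])
```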

 

Generative AI (GenAI)

GenAI refers to artificial intelligence systems that can generate new content, such as text, images, audio, and video, in response to a user's prompts, after being trained on large sets of existing data. GenAI can produce digital art, images that look like photographs, or videos generated from text, for example using AI avatars. Large Language Models, a form of GenAI, generate text by predicting the next word based on patterns learned from vast amounts of data.

 

Generative Pre-Trained Transformers (GPT)

GPTs are a type of advanced artificial intelligence model, built on the transformer neural network architecture, primarily used for natural language processing tasks. GPT models can efficiently process and generate human-like text by learning from vast amounts of data. The “pre-trained” aspect refers to the initial extensive training these models undergo on massive amounts of text, allowing them to understand and predict language patterns. This pre-training equips GPT models with a broad understanding of language, context, and aspects of world knowledge.

The generative aspect is important to remember: these tools are designed to generate new, human-like responses, rather than retrieve and repeat existing information the way a Google search does.

 

Hallucinations

Hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model. The concept of AI hallucinations underscores the need for critical evaluation and verification of AI-generated information, as relying solely on AI outputs without scrutiny could lead to the dissemination of misinformation or flawed analyses.

 

Large Language Model (LLM)

A type of artificial intelligence model that uses deep learning algorithms to process and generate human-like language. LLMs are trained on large datasets of text and can perform a wide range of language tasks such as translation, summarization, and text generation.
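As a hedged sketch of one such task, the snippet below uses a pre-trained model from the Hugging Face transformers library to summarize a paragraph; the model name and length settings are illustrative choices, not recommendations.

```python
# A minimal sketch of using a pre-trained language model for summarization,
# assuming the Hugging Face "transformers" library is installed.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = (
    "Large language models are trained on very large collections of text. "
    "During training they learn statistical patterns of language, which "
    "lets them translate, summarize, and generate text when prompted."
)
summary = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```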

 

Machine Learning

A subset of artificial intelligence where computer systems can learn from data, recognize patterns, and make predictions or decisions without being explicitly programmed. Machine learning algorithms improve as they process more data, enabling applications like image recognition, speech translation, and recommendation systems.
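For illustration, the short scikit-learn sketch below trains a classifier on a built-in dataset of flower measurements: the model is never given explicit rules for telling the species apart; it learns them from labelled examples.

```python
# A minimal machine-learning sketch using scikit-learn's built-in iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier()
model.fit(X_train, y_train)            # learn patterns from the training examples
print(model.score(X_test, y_test))     # accuracy on examples the model has not seen
```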

 

Natural Language Processing (NLP)

NLP is a field at the intersection of computer science, artificial intelligence, and linguistics, focused on enabling computers to understand, interpret, and generate human language in a way that is both meaningful and useful. It involves the development of algorithms and systems that can analyze, comprehend, and respond to text or voice data in a manner similar to how humans do.
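As one illustrative sketch (assuming the spaCy library and its small English model are installed), NLP tooling typically breaks text into tokens and attaches linguistic information such as parts of speech and named entities.

```python
# A minimal NLP sketch using spaCy; the small English model must be
# downloaded first (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The university library in Toronto offers AI workshops every May.")

for token in doc:
    print(token.text, token.pos_)        # each word with its part of speech

for entity in doc.ents:
    print(entity.text, entity.label_)    # named entities, e.g. Toronto -> GPE
```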

 

Neural Network

A computational model inspired by the structure and functioning of the human brain, consisting of interconnected nodes, or "neurons", organized in layers. Neural networks can learn from data, recognize patterns, and make predictions or decisions by adjusting the strength of connections between neurons, enabling deep learning applications such as image recognition and natural language processing.
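As a toy sketch in Python, a single artificial "neuron" combines its inputs through weighted connections and passes the result through an activation function; learning consists of gradually adjusting those weights. The numbers below are arbitrary.

```python
# A toy sketch of a single artificial neuron using NumPy: the "strength of the
# connections" is the weight vector, which training would gradually adjust.
import numpy as np

def neuron(inputs, weights, bias):
    weighted_sum = np.dot(inputs, weights) + bias   # combine inputs via connections
    return 1.0 / (1.0 + np.exp(-weighted_sum))      # sigmoid activation

inputs = np.array([0.5, 0.2, 0.8])
weights = np.array([0.4, -0.6, 0.9])   # connection strengths (learned in practice)
print(neuron(inputs, weights, bias=0.1))
```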

 

Prompts

A prompt is the input given to an AI model to initiate or guide its generation process. This input acts as a directive or a set of instructions that the AI uses to produce its output. Prompts are crucial in defining the nature, scope, and specificity of the output generated by the AI system.

 

Prompt Engineering

The systematic process of designing clear, contextually relevant, and actionable prompts for GenAI. These prompts serve as cues or instructions that guide the GenAI models' behaviours, influencing the generation of completions. Formulating clear, effective, and unbiased prompts is crucial to obtaining the best possible completions.
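As a simple, hypothetical illustration, the two prompts below ask for the same thing; the second adds audience, scope, and an output format, which typically leads to a more useful completion.

```python
# A hypothetical illustration of prompt engineering: the same request,
# first as a vague prompt and then as a clearer, more specific one.
vague_prompt = "Tell me about climate change."

engineered_prompt = (
    "You are assisting a first-year undergraduate. Summarize the main causes "
    "of climate change in plain language, in no more than five bullet points, "
    "and note one area of ongoing scientific debate."
)
```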

 

Training

Training is the process by which a machine learning model learns to perform a specific task. This is achieved by exposing the model to a large set of data, known as the training dataset, and allowing it to iteratively adjust its internal parameters to minimize errors in its output.
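As a simplified sketch, the loop below "trains" a one-parameter model by repeatedly adjusting that parameter to shrink the error between its predictions and the training data; the data and learning rate are made up for illustration.

```python
# A simplified training sketch: gradient descent adjusts a single internal
# parameter w so that the predictions w * x match the training data y = 3 * x.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # training inputs
y = 3.0 * x                          # training targets (the pattern to learn)

w = 0.0                              # the model's internal parameter, initially wrong
learning_rate = 0.01

for step in range(200):
    predictions = w * x
    error = predictions - y
    gradient = 2 * np.mean(error * x)   # how the squared error changes with w
    w -= learning_rate * gradient       # adjust the parameter to reduce the error

print(round(w, 3))                      # approximately 3.0 after training
```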