Self-supervised learning, a cutting-edge technique in artificial intelligence, empowers machines to discover intrinsic patterns and structures within data, mimicking the human ability to learn from context and experience rather than by explicit instruction.
Table of contents
- What is self-supervised learning?
- Self-supervised compared to other machine learning types
- How self-supervised learning works
- Types of self-supervised learning
- Applications of self-supervised learning
- Advantages of self-supervised learning
- Disadvantages of self-supervised learning
What is self-supervised learning?
Self-supervised learning is a type of machine learning (ML) that trains models to create their own labels—that is, explicitly paired inputs and outputs—using raw, unlabeled data. Unlike supervised learning, which requires a significant amount of labeled data, self-supervised learning generates pseudo-labels (artificial labels) from the data itself. This technique gives the model the goal orientation and measurability of a supervised learning approach, plus unsupervised learning’s ability to draw useful conclusions from massive amounts of unlabeled data.
Machine learning is a subset of artificial intelligence (AI) that uses data and statistical methods to build models that mimic human reasoning rather than relying on hard-coded instructions. Self-supervised learning leverages the vast amounts of unlabeled data available, making it a powerful approach for improving model performance with minimal manual intervention. In fact, today’s major generative AI text and image models are largely trained using self-supervised learning.
Self-supervised compared to other machine learning types
Self-supervised learning combines elements of both supervised and unsupervised learning but is distinct from semi-supervised learning:
- Supervised learning: Uses labeled data to train models for specific tasks such as classification and regression. The labels provide explicit guidance, allowing the model to make accurate predictions. Common applications include spam detection, image classification, and weather forecasting.
- Unsupervised learning: Works with unlabeled data to find patterns and groupings. It identifies clusters and associations and reduces data complexity for easier processing. Examples include customer segmentation, recommendation systems, and anomaly detection.
- Semi-supervised learning: Uses a modest amount of labeled data to provide initial guidance and then leverages one or more larger collections of unlabeled data to refine and improve the model. This approach is particularly useful when you have some labeled data, but it would be too difficult or expensive to generate enough for fully supervised learning.
- Self-supervised learning: Uses raw data to generate its own labels, allowing the model to learn from the data without any initial labeled data. This approach is especially valuable when labeled data is not available at all or is only a tiny fraction of the available data, such as with natural language processing (NLP) or image recognition.
How self-supervised learning works
Self-supervision means that the data itself provides the correct answers. The self-supervised learning process involves several steps, combining aspects of both supervised and unsupervised methods:
- Data collection: Gather a large amount of raw, unlabeled data. This data forms the basis for creating pseudo-labels and training the model. Many datasets are freely available.
- Preprocessing: Prepare the data to ensure quality. This step includes removing duplicates, handling missing values, and normalizing data ranges.
- Task creation: Create puzzles for the model to solve, known as pretext tasks. These are built by removing or shuffling parts of the data, such as deleting words, masking image pixels, or shuffling video frames. Whatever existed before this intentional corruption is known as a pseudo-label: a “right answer” created from the data itself rather than from human labeling (a minimal sketch of this appears after this list).
- Training: Train the model on the pretext tasks using the generated pseudo-labels. The model produces an answer, compares it to the pseudo-label, adjusts its parameters, and tries again. This phase helps the model learn the structure of the data and eventually build a rich understanding of how inputs relate to outputs.
- Fine-tuning: Switch the model to learn from a smaller, labeled dataset to improve its performance on specific tasks. This step ensures the model leverages the representations learned during the initial training phase. Fine-tuning is not strictly necessary, but it typically leads to better results.
- Evaluation: Assess the model’s performance on data it hasn’t yet seen. Using standard metrics relevant to the task, such as the F1 score, this evaluation ensures that the model generalizes well to new data.
- Deployment and monitoring: Deploy the trained model in real-world applications and continuously monitor its performance. Update the model with new data as needed to maintain its accuracy and relevance.
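To make the task-creation and training steps concrete, here is a minimal Python sketch of how a pretext task might generate pseudo-labels from raw text. The masking approach, the make_pretext_example helper, and the placeholder model.predict/model.update calls in the comments are illustrative assumptions, not a real framework’s API.

```python
import random

MASK = "[MASK]"

def make_pretext_example(sentence, mask_prob=0.15, seed=None):
    """Corrupt a sentence by masking random words.

    Returns (corrupted_tokens, pseudo_labels), where pseudo_labels maps each
    masked position to the original word -- the "right answer" created from
    the data itself rather than from human labeling.
    """
    rng = random.Random(seed)
    tokens = sentence.split()
    corrupted, pseudo_labels = [], {}
    for i, word in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            pseudo_labels[i] = word  # what was there before the corruption
        else:
            corrupted.append(word)
    return corrupted, pseudo_labels

# Shape of the training step (the model object and its methods are placeholders):
#   corrupted, pseudo_labels = make_pretext_example(sentence)
#   predictions = model.predict(corrupted)      # the model's guesses
#   loss = compare(predictions, pseudo_labels)  # how far off was it?
#   model.update(loss)                          # adjust, then try again

corrupted, labels = make_pretext_example(
    "self supervised learning creates labels from raw data", mask_prob=0.3, seed=0
)
print(corrupted)  # the corrupted input the model would see
print(labels)     # masked positions mapped back to the original words
```

In a real pipeline, the comparison step is a differentiable loss and the update step is gradient descent, but the division of labor is the same: the data supplies the answers, and the model learns by trying to recover them.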
Types of self-supervised learning
Self-supervised learning encompasses various types, each with multiple techniques and approaches. Here, we will explore several types, highlighting their unique training methods and providing one or two representative examples for each.
For images
- Self-predictive learning: Self-predictive learning involves techniques like autoencoding, where a model learns to compress information into a simpler form and then recreate the original data from it. In image processing, this often means selectively corrupting parts of an image (for instance, by masking sections) and training the model to reconstruct the original. This helps the model better recognize objects in different positions and sizes, and even when they are partially hidden (a small reconstruction sketch follows this list).
- Contrastive learning: In contrastive learning, the model learns to distinguish between similar and different images by comparing them in pairs or groups. For example, the SimCLR method uses image augmentations (like cropping, distorting, and flipping) to create training pairs. Positive pairs are made by applying different changes to the same image, while negative pairs come from different images. The model then learns which features similar pairs share and which features set dissimilar pairs apart (see the contrastive sketch after this list).
- Clustering-based methods: Clustering-based methods group similar data points together and use these clusters as pseudo-labels for training. For instance, DeepCluster clusters images by similar features and uses these clusters to train the model. The process alternates between clustering and training until the model performs well. SwAV (Swapping Assignments Between Views) enhances this by using multiple versions of the same image to help the model learn essential features that stay constant, such as edges, textures, and object positions.
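As a rough illustration of the self-predictive idea, the NumPy sketch below masks random patches of an image and scores a reconstruction only on the hidden pixels, which act as the pseudo-labels. The mask_patches helper and the placeholder reconstruction are hypothetical stand-ins for what an autoencoder-style network would actually produce.

```python
import numpy as np

def mask_patches(image, patch=4, mask_ratio=0.5, seed=None):
    """Zero out a random subset of (patch x patch) blocks in a grayscale image.

    Returns (masked_image, mask); the original pixel values under the mask act
    as the pseudo-labels the model must reconstruct.
    """
    rng = np.random.default_rng(seed)
    height, width = image.shape
    masked = image.copy()
    mask = np.zeros_like(image, dtype=bool)
    for y in range(0, height, patch):
        for x in range(0, width, patch):
            if rng.random() < mask_ratio:
                masked[y:y + patch, x:x + patch] = 0.0
                mask[y:y + patch, x:x + patch] = True
    return masked, mask

def reconstruction_loss(original, reconstruction, mask):
    """Mean squared error measured only on the pixels the model could not see."""
    return float(np.mean((original[mask] - reconstruction[mask]) ** 2))

image = np.random.default_rng(0).random((16, 16))  # a toy grayscale "image"
masked, mask = mask_patches(image, seed=1)
reconstruction = masked.copy()  # placeholder for an autoencoder's output
print(reconstruction_loss(image, reconstruction, mask))
```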
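And here is a simplified sketch of the contrastive setup in the SimCLR spirit: two augmented views of the same image form a positive pair, views of other images serve as negatives, and an InfoNCE-style loss pulls positives together. The augment and embed functions are toy stand-ins (a random projection instead of a trained encoder), so treat this as an outline of the technique rather than a faithful SimCLR implementation.

```python
import numpy as np

def augment(image, rng):
    """Cheap stand-in augmentations: random horizontal flip plus pixel noise."""
    view = image[:, ::-1] if rng.random() < 0.5 else image
    return view + rng.normal(0.0, 0.05, size=view.shape)

def embed(image, projection):
    """Stand-in encoder: flatten, project, and L2-normalize."""
    z = image.reshape(-1) @ projection
    return z / np.linalg.norm(z)

def contrastive_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style loss: each anchor should be most similar to its own positive.

    anchors, positives: (N, D) arrays of normalized embeddings. For anchor i,
    positives[i] is the positive; every other row of positives is a negative.
    """
    sims = anchors @ positives.T / temperature       # (N, N) similarity matrix
    sims -= sims.max(axis=1, keepdims=True)          # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
images = rng.random((8, 16, 16))             # a toy batch of images
projection = rng.normal(size=(16 * 16, 32))  # stand-in for a learned encoder
anchors = np.stack([embed(augment(img, rng), projection) for img in images])
positives = np.stack([embed(augment(img, rng), projection) for img in images])
print(contrastive_loss(anchors, positives))  # lower when views of the same image align
```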
For text
- Self-predictive learning: This is the core training mechanism of large language models (LLMs), which understand text as a series of tokens. A token typically represents a single word, but it can also be part of a word or a short group of words.
- Masked language models (MLMs): These are shown sentences with some tokens missing and tasked with predicting the missing tokens. By learning how to fill in these blanks, MLMs develop a thorough representation of language structure and context, and they can consider the context of an entire input when making predictions. Useful outputs, such as sentiment analysis or named entity recognition, are developed through fine-tuning. A prime example is BERT, which Google uses to understand the intent of search queries.
- Causal language models (CLMs): Generative models such as ChatGPT, Claude, and Gemini learn to recreate text they have seen by predicting one token at a time, based on the previous tokens. Once trained, they treat input text as the context for their predictions and keep making predictions with every new token they generate. This sequential prediction is why their output appears to be typing itself out rather than appearing all at once (see the input/target sketch after this list).
- Contrastive learning: This approach compares pairs of text samples, emphasizing the differences and similarities between them. SimCSE creates two slightly different versions of the same sentence by applying dropout, which randomly ignores parts of the sentence’s representation in hidden layers during training (see more about hidden layers in our post on deep learning). The model learns to recognize these versions as similar. This technique improves the model’s ability to understand and compare sentences, making it useful for applications like finding similar sentences or retrieving relevant information for search queries.
- Next sentence prediction (NSP): As the name suggests, NSP involves predicting whether one sentence directly follows another in a document, helping models understand relationships between sentences and the logical flow of text. It’s commonly used alongside an MLM to enhance its understanding of larger bodies of text. For example, BERT is trained with NSP so the model learns whether two sentences appear consecutively in the original text (a pair-construction sketch follows this list).
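For causal language modeling, the pseudo-label at each position is simply the next token of the original text. The sketch below shows that input/target shift using a toy word-level tokenizer; real LLMs use learned subword tokenizers and neural networks, which are omitted here.

```python
def next_token_examples(token_ids):
    """Causal language modeling: the pseudo-label at each position is simply
    the next token of the original text. Returns (inputs, targets), where
    targets[i] is the token to predict after seeing inputs[: i + 1]."""
    return token_ids[:-1], token_ids[1:]

# Toy word-level "tokenizer", just for illustration.
vocab = {}
def tokenize(text):
    return [vocab.setdefault(word, len(vocab)) for word in text.split()]

ids = tokenize("the model predicts one token at a time")
inputs, targets = next_token_examples(ids)
for i in range(len(inputs)):
    print(f"given {inputs[: i + 1]} -> predict token {targets[i]}")
```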
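For next sentence prediction, positive examples are adjacent sentence pairs from a document, and negative examples pair a sentence with a randomly chosen, non-adjacent one. The nsp_pairs helper below is a hypothetical illustration of that pair construction, not BERT’s actual preprocessing code.

```python
import random

def nsp_pairs(sentences, rng=None):
    """Build next-sentence-prediction examples from an ordered document.

    Adjacent sentences form a positive pair (label 1); pairing a sentence with
    a random non-adjacent sentence forms a negative pair (label 0).
    """
    rng = rng or random.Random()
    pairs = []
    for i in range(len(sentences) - 1):
        pairs.append((sentences[i], sentences[i + 1], 1))  # true next sentence
        candidates = [k for k in range(len(sentences)) if k not in (i, i + 1)]
        pairs.append((sentences[i], sentences[rng.choice(candidates)], 0))
    return pairs

document = [
    "Self-supervised learning creates its own labels.",
    "The labels come from the structure of the data.",
    "Models then train against those pseudo-labels.",
    "Fine-tuning adapts the model to a downstream task.",
]
for first, second, label in nsp_pairs(document, random.Random(0)):
    print(label, "|", first, "->", second)
```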
Applications of self-supervised learning
Self-supervised learning has a wide range of applications across various domains:
- Natural language processing: Models like BERT and GPT-3 use self-supervised learning to understand and generate human language in applications such as chatbots, translation, and text summarization.
- Computer vision: Self-supervised learning improves image and video analysis by generating pseudo-labels from raw visual data. Uses include object detection (such as on a doorbell cam), facial recognition, and automatically creating clips from longer videos.
- Speech recognition: Self-supervised models improve speech recognition systems by learning from vast amounts of unlabeled audio data. This approach reduces the need for manual transcription and improves accuracy across different accents and dialects.
- Healthcare: Self-supervised learning helps improve medical image analysis, drug discovery, and patient monitoring by leveraging large datasets with minimal labeled examples. It enhances the accuracy of disease detection and treatment recommendations without requiring extensive and expensive expert human labeling.
- Robotics: Robots use self-supervised learning to understand their environment and improve their decision-making processes. Uses include autonomous navigation, object manipulation, and human-robot interaction.
Advantages of self-supervised learning
- Cost-effective: Reduces the need for extensive labeled data, lowering annotation costs and human effort.
- Scalability: Can handle large datasets, making it suitable for real-world applications where labeled data is limited but unlabeled data is abundant.
- Generalization: When trained on enough raw data, a model can perform new tasks even if it wasn’t trained on directly relevant data. For instance, an NLP model trained on one language can help bootstrap a model for another language.
- Flexibility: Adaptable to a wide variety of tasks and domains, with many subtypes available to fit particular needs.
Disadvantages of self-supervised learning
- Complexity: Creating effective pretext tasks and generating pseudo-labels requires careful design and experimentation.
- Noise sensitivity: Pseudo-labels generated from raw data can be noisy or irrelevant to the downstream goal, potentially hurting performance by training the model on signals it doesn’t actually need.
- Computational resources: Training self-supervised models, especially with large datasets, demands significant computational power and time.