Visual Instruction Tuning
Introducing a method for aligning large language models (LLMs) with visual information by instruction tuning on multimodal instruction-following data generated from image-text pairs using a language-only GPT-4.
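The model built in this paper (LLaVA) connects a frozen vision encoder to an LLM through a single trainable projection. A minimal sketch of that connector, with illustrative dimensions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

# Illustrative dimensions, not the paper's exact configuration.
VISION_DIM = 1024   # e.g. width of CLIP ViT-L/14 patch features
LLM_DIM = 4096      # e.g. the LLM's hidden/embedding width

# LLaVA-style connector: a trainable linear projection mapping frozen
# vision-encoder patch features into the LLM's word-embedding space.
projector = nn.Linear(VISION_DIM, LLM_DIM)

patch_features = torch.randn(1, 256, VISION_DIM)  # [batch, patches, dim] from a frozen encoder
visual_tokens = projector(patch_features)          # [1, 256, LLM_DIM]

# These visual tokens are prepended to the text token embeddings and the
# combined sequence is fed to the LLM during instruction tuning.
print(visual_tokens.shape)  # torch.Size([1, 256, 4096])
```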
Investigating the effectiveness of plain, non-hierarchical Vision Transformers as backbones for object detection and proposing minimal adaptations, such as a simple feature pyramid built from the backbone's single-scale output, that make them competitive with hierarchical designs.
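A hedged sketch of that simple feature pyramid: every detection scale is derived from the single 1/16-scale map a plain ViT produces, using pooling and deconvolution. The channel width is illustrative, and the paper's intermediate norm/activation layers are omitted for brevity.

```python
import torch
import torch.nn as nn

C = 768                                  # illustrative ViT channel width
feat16 = torch.randn(1, C, 64, 64)       # last ViT feature map at 1/16 scale

feat32 = nn.MaxPool2d(kernel_size=2, stride=2)(feat16)             # 1/32 via downsampling
feat8 = nn.ConvTranspose2d(C, C, kernel_size=2, stride=2)(feat16)  # 1/8 via one 2x deconv
feat4 = nn.Sequential(
    nn.ConvTranspose2d(C, C, kernel_size=2, stride=2),
    nn.ConvTranspose2d(C, C, kernel_size=2, stride=2),
)(feat16)                                                          # 1/4 via two 2x deconvs

for f in (feat4, feat8, feat16, feat32):
    print(f.shape)
```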
Introducing YOLO, a unified, real-time object detection system that frames object detection as a single regression problem.
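To make the "single regression problem" concrete, here is a hedged sketch of YOLO's output encoding, using the original paper's grid and box counts (S=7, B=2, C=20 for PASCAL VOC):

```python
import torch

S, B, C = 7, 2, 20

# Stand-in for the network's final fully connected layer output: one
# forward pass regresses the whole S x S x (B*5 + C) detection tensor.
raw = torch.randn(1, S * S * (B * 5 + C))
pred = raw.view(1, S, S, B * 5 + C)       # [batch, 7, 7, 30]

cell = pred[0, 3, 4]                      # predictions for one grid cell
boxes = cell[: B * 5].view(B, 5)          # B boxes: (x, y, w, h, confidence)
class_probs = cell[B * 5:]                # C conditional class probabilities
print(boxes.shape, class_probs.shape)     # torch.Size([2, 5]) torch.Size([20])
```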
Introducing EfficientNet, a family of convolutional neural networks that achieve state-of-the-art accuracy with significantly improved efficiency through a novel compound scaling method.
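The compound scaling rule is simple enough to show directly. This sketch uses the paper's published constants, but the baseline depth, width, and resolution here are illustrative placeholders, not EfficientNet-B0's exact per-stage configuration:

```python
# Compound scaling: depth, width, and resolution grow jointly with a single
# coefficient phi, using the paper's grid-searched constants, chosen so that
# alpha * beta^2 * gamma^2 ~= 2 (FLOPs roughly double per unit of phi).
alpha, beta, gamma = 1.2, 1.1, 1.15

def compound_scale(phi: float, base_depth: int = 18, base_width: int = 64,
                   base_resolution: int = 224) -> tuple[int, int, int]:
    """Return (depth, width, resolution) scaled from illustrative baselines."""
    depth = round(base_depth * alpha ** phi)
    width = round(base_width * beta ** phi)
    resolution = round(base_resolution * gamma ** phi)
    return depth, width, resolution

for phi in range(4):
    print(phi, compound_scale(phi))
```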
Introducing Faster R-CNN, which improves on R-CNN and Fast R-CNN by adding a Region Proposal Network (RPN) that shares convolutional features with the detector, making proposal generation nearly cost-free and detection both faster and more accurate.
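A minimal sketch of an RPN head, assuming the paper's k=9 anchors per location: a 3x3 conv over shared backbone features followed by two sibling 1x1 convs for objectness and box regression. The paper scores anchors with a two-way softmax; a one-score-per-anchor sigmoid variant is shown here for brevity.

```python
import torch
import torch.nn as nn

k = 9  # anchors per spatial location (3 scales x 3 aspect ratios)

class RPNHead(nn.Module):
    def __init__(self, in_channels: int = 256, mid_channels: int = 256):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, mid_channels, 3, padding=1)
        self.objectness = nn.Conv2d(mid_channels, k, 1)       # object vs. background per anchor
        self.bbox_deltas = nn.Conv2d(mid_channels, 4 * k, 1)  # (dx, dy, dw, dh) per anchor

    def forward(self, features: torch.Tensor):
        x = torch.relu(self.conv(features))
        return self.objectness(x), self.bbox_deltas(x)

scores, deltas = RPNHead()(torch.randn(1, 256, 50, 50))
print(scores.shape, deltas.shape)  # [1, 9, 50, 50] and [1, 36, 50, 50]
```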
Introducing SAM (Segment Anything), a promptable segmentation model capable of segmenting any object in an image with a wide range of prompts, including points, boxes, and text.
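A hedged usage sketch with the released segment-anything package; the checkpoint path, stand-in image, and point coordinates are all placeholders.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a real RGB image
predictor.set_image(image)

# Prompt with a single foreground point; box prompts work via the box= argument.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),   # 1 = foreground, 0 = background
    multimask_output=True,        # return several candidate masks
)
print(masks.shape, scores.shape)
```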
Introducing DETR, a novel end-to-end object detection framework that uses a Transformer to directly predict the final set of detections via bipartite matching, removing hand-designed components such as anchor generation and non-maximum suppression.
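The bipartite matching step can be shown in miniature. In the paper the pairwise cost combines classification and box terms; a random stand-in cost matrix is used here:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Each ground-truth object is assigned to exactly one of the N predicted
# queries by minimizing the total matching cost (Hungarian algorithm).
num_queries, num_targets = 100, 4
cost = np.random.rand(num_queries, num_targets)

pred_idx, tgt_idx = linear_sum_assignment(cost)
print(list(zip(pred_idx, tgt_idx)))  # which query is matched to which object

# Unmatched queries are trained to predict a special "no object" class,
# which is what removes the need for NMS at inference time.
```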
Introducing BLIP-2, a vision-language model that bridges frozen image encoders and frozen large language models with a lightweight Querying Transformer (Q-Former), achieving strong multimodal performance with far fewer trainable parameters.
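A sketch of the central idea only: a small set of learned query vectors cross-attends to frozen image features, compressing them into a fixed number of tokens for the LLM. Dimensions are illustrative, and the real Q-Former is a multi-layer transformer, not the single attention call shown here.

```python
import torch
import torch.nn as nn

num_queries, dim = 32, 768
queries = nn.Parameter(torch.randn(1, num_queries, dim))  # learned query tokens

cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=12, batch_first=True)
image_feats = torch.randn(1, 257, dim)  # frozen ViT patch features (+CLS), stand-in

# The queries attend to the image features and absorb visual information.
out, _ = cross_attn(query=queries, key=image_feats, value=image_feats)
print(out.shape)  # torch.Size([1, 32, 768]); these tokens then go to the frozen LLM
```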
Introducing Vision Transformer (ViT), a pure transformer architecture that treats an image as a sequence of patches and achieves state-of-the-art image recognition results when pre-trained on sufficiently large datasets.
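ViT's only image-specific step is the patch embedding, which is equivalent to a convolution whose kernel size equals its stride. A sketch using 16x16 patches as in ViT-B/16:

```python
import torch
import torch.nn as nn

patch, dim = 16, 768
# Split the image into fixed-size patches and linearly project each one.
to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

image = torch.randn(1, 3, 224, 224)
tokens = to_tokens(image).flatten(2).transpose(1, 2)  # [1, 196, 768]

# A learned [CLS] token and position embeddings are added, then the
# sequence passes through a standard Transformer encoder.
print(tokens.shape)
```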
Introducing SURF (Speeded Up Robust Features), a fast and robust algorithm for local feature detection and description, often used in applications like object recognition, image registration, and 3D reconstruction.
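A hedged usage sketch with OpenCV: SURF is patented, so it lives in the contrib xfeatures2d module and is only available when OpenCV is built with the nonfree modules enabled. The input image here is a stand-in.

```python
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

gray = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in grayscale image
keypoints, descriptors = surf.detectAndCompute(gray, None)
print(len(keypoints), descriptors.shape if descriptors is not None else None)
```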
Introducing Swin Transformer, a hierarchical Vision Transformer that uses shifted windows to achieve improved efficiency and performance in various vision tasks.
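The "shift" itself is a cyclic roll of the feature map by half a window between consecutive blocks, so the next round of window attention mixes information across the previous window borders. A sketch with illustrative sizes (the attention masking of wrapped regions is omitted):

```python
import torch

window = 7
x = torch.randn(1, 56, 56, 96)  # [batch, H, W, channels]

shift = window // 2
shifted = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))

# Partition into non-overlapping 7x7 windows; self-attention then runs
# independently inside each window.
B, H, W, C = shifted.shape
windows = (shifted.view(B, H // window, window, W // window, window, C)
                  .permute(0, 1, 3, 2, 4, 5)
                  .reshape(-1, window * window, C))
print(windows.shape)  # [64, 49, 96]
```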
Introducing CLIP, a neural network trained on 400 million image-text pairs collected from the web, which learns to connect images with their textual descriptions and enables zero-shot image classification among other capabilities.
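CLIP's training objective in miniature, with random stand-in embeddings: matched image-caption pairs sit on the diagonal of an N x N similarity matrix, and a symmetric cross-entropy pulls them together.

```python
import torch
import torch.nn.functional as F

N, dim = 8, 512
img = F.normalize(torch.randn(N, dim), dim=-1)   # image embeddings
txt = F.normalize(torch.randn(N, dim), dim=-1)   # text embeddings

temperature = 0.07                               # initialized to 0.07 and learned in the real model
logits = img @ txt.t() / temperature             # [N, N] pairwise similarities
labels = torch.arange(N)                         # matched pairs are on the diagonal

loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
print(loss.item())
```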
Analysis of the groundbreaking ResNet architecture, whose residual (skip) connections made it practical to train networks hundreds of layers deep.
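The core of ResNet in a few lines: a block learns a residual F(x) and outputs F(x) + x, so the identity path gives gradients a direct route through very deep stacks. This is the basic (non-bottleneck) block with an identity shortcut; channel counts are illustrative.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # the skip connection

print(BasicBlock(64)(torch.randn(1, 64, 56, 56)).shape)
```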