Meta Releases Open-Source "Segment Anything" 2.0 Model, Now Capable of Video Segmentation
Meta unveils Segment Anything Model 2 (SAM 2), providing real-time object segmentation for both static images and dynamic videos. Now open-source and faster than ever.
Remember Meta's "Segment Anything Model"? Released in April of last year, it was widely seen as a breakthrough for traditional computer-vision segmentation tasks.
Now, just over a year later, Meta has announced the launch of Segment Anything Model 2 (SAM 2) at SIGGRAPH.
Building on its predecessor, SAM 2 unifies image and video segmentation into a single system, providing real-time, promptable object segmentation for both static images and dynamic video.
SAM 2 can segment any object in any image or video, including objects it has never seen before, which lets it support a wide range of use cases without custom adaptation.
In a conversation with NVIDIA CEO Jensen Huang, Mark Zuckerberg highlighted SAM 2: "Being able to do this in video, with zero-shot capabilities, and tell it what you want is very cool."