Meta Unveils SAM 3: A Breakthrough Multimodal Segmentation Model Bridging Language and Vision
Meta has released the third iteration of its Segment Anything Model (SAM 3), an open-vocabulary system that segments images and video from language prompts, unifying vision and language through a novel human-AI collaborative training approach.
