Face segmentation with YOLO on non labelled dataset with CLIP, Grounding DINO and Grounding SAM

FaceSeg

Note

(Currently in progress)

The next version of the CLIP-DINO-SAM combination is coming soon! 📆

Tip

📄 Paper with a detailed explanation of how the CLIP-DINO-SAM models are combined: PDF

:octocat: GitHub repository with a detailed workflow for labelling data with CLIP-DINO-SAM for YOLO: Github

👀 Example Output

Here are example predictions from a YOLO model segmenting parts of the face after being trained on a dataset auto-labeled with CLIP-DINO-SAM.

📚 Basic Concepts

The CLIP-DINO-SAM combination is a heavyweight pipeline: it runs slowly and needs a substantial amount of GPU memory. To save you time waiting for results (and me time writing this tutorial), the walkthrough below covers only two images. For the most curious, I also include a complete pipeline for training on a custom face dataset. Enjoy 🎉
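The hand-off between the three models can be sketched schematically. Everything below is a hypothetical illustration: the function names and stub return values are placeholders for the real CLIP, Grounding DINO and SAM calls, and only show how the stages feed each other (text prompts → ranked prompts → boxes → masks).

```python
# Schematic sketch of the CLIP-DINO-SAM auto-labeling flow.
# All three functions are stubs standing in for the real model calls.

def clip_rank_prompts(image, prompts):
    # CLIP scores each text prompt against the image; keep the relevant ones.
    # Stub: pretend every prompt is relevant.
    return prompts

def dino_detect(image, prompt):
    # Grounding DINO turns a text prompt into bounding boxes.
    # Stub: one dummy box per prompt as (x_min, y_min, x_max, y_max).
    return [(10, 10, 50, 50)]

def sam_segment(image, box):
    # SAM refines a bounding box into a pixel-level mask.
    # Stub: represent the mask by its source box.
    return {"box": box, "mask": "binary-mask-placeholder"}

def auto_label(image, prompts):
    """Run the three stages and collect one labeled mask per detection."""
    labels = []
    for prompt in clip_rank_prompts(image, prompts):
        for box in dino_detect(image, prompt):
            labels.append((prompt, sam_segment(image, box)))
    return labels

labels = auto_label("img1.jpeg", ["eyes", "nose", "mouth"])
print(len(labels))  # → 3
```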

💿 Installation

Clone repo

git clone https://github.com/Mikzarjr/Face-Segmentation

Install requirements

pip install -r FaceSeg/requirements.txt

or

pip install -e .

📑 Walkthrough

Segmentation with CLIP-DINO-SAM only 🎨

Import dependencies

from FaceSegmentation.Pipeline.Config import *
from FaceSegmentation.Pipeline.Segmentation import FaceSeg

Choose image to test the framework

sample images are located in FaceSeg/TestImages

image_path = f"{IMGS_DIR}/img1.jpeg"

Run the following cell to get segmentation masks

Main segmentation mask is located in /segmentation/combined_masks

All separate masks are located in /segmentation/split_masks

S = FaceSeg(image_path)
S.Segment()
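After `S.Segment()` finishes, the mask files can be gathered with a short helper. This is a sketch assuming the directory layout described above (`combined_masks` and `split_masks` under the segmentation root); `collect_masks` is illustrative and not part of the repository.

```python
from pathlib import Path

def collect_masks(seg_root):
    """Gather the combined mask and the per-part masks produced by FaceSeg.

    Assumes the layout described above:
    <seg_root>/combined_masks and <seg_root>/split_masks.
    """
    root = Path(seg_root)
    combined = sorted((root / "combined_masks").glob("*"))
    split = sorted((root / "split_masks").glob("*"))
    return combined, split
```

For example, `collect_masks("segmentation/img1")` would return the combined mask path(s) and the list of per-part mask paths.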

Annotations for training YOLO 📝

Create COCO.json annotations

from FaceSegmentation.Pipeline.Annotator import CreateJson
image_path = "/content/segmentation/img1/img1.jpg"
A = CreateJson(image_path)
A.CreateJsonAnnotation()
A.CheckJson()

The output file, COCO.json, will be written to COCO_DIR.

Convert COCO.json annotations to YOLOv8 txt annotations

from FaceSegmentation.Pipeline.Converter import ConvertCtY
json_path = f"{COCO_DIR}/COCO.json"
C = ConvertCtY(json_path)
C.Convert()

The converted annotations will be written to YOLO_DIR.
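At its core, the COCO-to-YOLO conversion is a coordinate change: COCO stores absolute `[x_min, y_min, width, height]` boxes in pixels, while YOLO expects `[x_center, y_center, width, height]` normalized to the 0–1 range by the image size. A minimal sketch of that math (`coco_box_to_yolo` is illustrative, not the repository's converter):

```python
def coco_box_to_yolo(box, img_w, img_h):
    """Convert a COCO bbox [x_min, y_min, width, height] in pixels to the
    YOLO format [x_center, y_center, width, height], normalized to 0..1."""
    x, y, w, h = box
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]

# A 100x50 box with top-left corner at (10, 20) in a 200x100 image:
print(coco_box_to_yolo([10, 20, 100, 50], 200, 100))  # [0.3, 0.45, 0.5, 0.5]
```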
