
Feature #11960

Refine drawn masks with GrabCut foreground extraction

Added by Aurélien PIERRE about 2 years ago. Updated about 2 years ago.

Status:
New
Priority:
Low
Assignee:
-
Category:
Masks
Target version:
-
Start date:
01/23/2018
Due date:
% Done:

0%

Estimated time:
Affected Version:
git master branch
System:
all
bitness:
64-bit
hardware architecture:
amd64/x86

Description

OpenCV is an open-source C/C++ library for computer vision. It ships the GrabCut algorithm, which extracts the foreground of a picture using user brush strokes as input: https://docs.opencv.org/trunk/d8/d83/tutorial_py_grabcut.html

I wonder if it would be usable as an edge-refinement tool for drawn masks in darktable.

History

#1 Updated by Roman Lebedev about 2 years ago

  • Assignee deleted (Aldric Renaudin)

I think we'd rather not open the box by depending on OpenCV for anything.
Even though this would add a soft-ish dependency (only when creating the mask, not each time it is used),
I would still highly advise against adding new dependencies like this.

Also, there was some suggestion about re-using the [not so awesome] existing code for that:
https://www.mail-archive.com/darktable-dev@lists.darktable.org/msg02035.html

#2 Updated by Aurélien PIERRE about 2 years ago

Fair enough.

But is there any particular reason not to depend on OpenCV? As photo editing goes (in Adobe PS/LR), all the new features come from machine learning/AI, so I guess at some point we will have to make the shift…

#3 Updated by Tobias Ellinghaus about 2 years ago

I understand that companies have to come up with new features constantly to justify selling a new product. But that isn't the case for darktable. So if we think that none of the AI stuff has any merit, we can simply ignore it and eventually consider darktable feature complete. I don't expect that to happen anytime soon; I just wanted to point out the flaw in your logic that we have to follow commercial programs.

#4 Updated by Aurélien PIERRE about 2 years ago

To be clear, I'm not asking to follow trends. I'm just saying that the next step in efficient masking is machine-learning foreground extraction, the next step in spot removal is machine-learning in-painting, and the next step in non-local denoising is machine-learning texture extraction. Basically, all the research I read these days can be summed up as "doing a better job with machine learning than what we did before with filters".

I understand there are some hardware limitations and software drawbacks to that, but if you want to keep improving the functionality darktable already offers, I fear the next top-notch algorithms will often rely on machine learning and AI.
