Refine drawn masks with GrabCut foreground extraction
OpenCV is an open-source C/C++ library for computer vision. It ships the GrabCut algorithm, which extracts the foreground of a picture using user brush strokes as input: https://docs.opencv.org/trunk/d8/d83/tutorial_py_grabcut.html
I wonder whether it could be used as an edge-refinement tool for drawn masks in darktable.
#1 Updated by Roman Lebedev about 2 years ago
- Assignee deleted (
I think we'd rather not open that box by depending on OpenCV for anything.
Even though this would add a soft-ish dependency (only needed when creating the mask, not each time it is used),
I would still highly advise against adding new dependencies like this.
Also, there was some suggestion about re-using the [not so awesome] existing code for that:
#3 Updated by Tobias Ellinghaus about 2 years ago
I understand that companies have to come up with new features constantly to justify selling a new product. But that isn't the case for darktable. So if we think that none of the AI stuff has any merit we can simply ignore it and eventually consider darktable to be feature complete. I don't expect that to happen anytime soon, I just wanted to point out the flaw in your logic that we have to follow commercial programs.
#4 Updated by Aurélien PIERRE about 2 years ago
To be clear, I'm not asking to follow trends. I'm just saying that the next step in efficient masking is machine-learning foreground extraction, the next step in spot removal is machine-learning in-painting, and the next step in non-local denoising is machine-learning texture extraction. Basically, all the research I read these days can be summed up as "doing a better job with machine learning than what we did before with filters".
I understand there are some hardware limitations and software drawbacks to that, but if you want to keep improving the functionality darktable already offers, I fear the next top-notch algorithms will often rely on machine learning and AI.