Feature #12476

Prioritize fixing poor usability over adding even more functionality (at least for a while)

Added by Vincent Fregeac 8 months ago. Updated 8 months ago.

Status: New
Priority: Low
Assignee: -
Category: General
Target version: -
Start date: 12/17/2018
Due date:
% Done: 0%
Estimated time:
Affected Version: git master branch
System: all
bitness: 64-bit
hardware architecture: amd64/x86

Description

Darkroom functionality (the number of modules and of options within modules) has grown to the point where it surpasses the best closed-source equivalents. On the other hand, a lot of the GUI is still in an alpha state. For example, the preferences are still a long list of unrelated options without any real grouping (except GUI/Core, which is far from sufficient); modules cannot be scrolled normally, the only option being the tiny scroll bar on the right; the module list has no organisation whatsoever (the four denoise modules are not even close together); rating is missing from the list of criteria for collections; etc.

While prioritizing functionality over usability is OK during initial development (alpha), it shouldn't be this bad in software that claims to be at version 2.4. Compared to any other software, the GUI really looks more like 0.2.4 than 2.4.x.

A few suggestions:
- A tag tree is mandatory in any decent image management tool. It is already implemented in the collection criteria, so why not in the tagging module, which is used far more often than the tag criterion in collections?
- Modules with the same functionality should be grouped as one module. The fact that there are four separate denoise modules spread all over the correction list makes no sense.
- Have some logical order in the module list (at least alphabetical) so we don't have to read the entire list every time we are looking for a module.
- Make the module list scrollable normally (scroll wheel/two-finger scroll on a touchpad).
- Add a filmstrip in lighttable so we can move between photos when the lighttable is set to a single photo.
- Move the zoom selector in darkroom to where it makes sense: close to the other options in the bottom-right corner of the picture.
- Provide two (preferably three) sets of icons (or use vector icons) so they remain visible on a high-DPI screen.
- Make the metadata editor configurable with the list of IPTC/XMP fields we want (not every user of darktable is a beginner photographer who only needs a title and a description).
- Include a reverse lookup of GPS coordinates, at least for country/state or region/city.
- etc.
I understand the appeal of working on new, better modules rather than on mundane usability improvements, but a version 2.x shouldn't be such a pain to use. Besides, there are already enough modules to get lost in the possibilities. It is about time to improve the usability of these modules and the digital asset management side of lighttable to create a more balanced darktable.

History

#1 Updated by Pascal Obry 8 months ago

- Have some logical order in the module list (at least alphabetical) so we don't have to read the entire list every time we are looking for a module.

There is a logical order, and a very important one, related to the pixelpipe.

- Modules with the same functionality should be grouped as one module. The fact that there are four separate denoise modules spread all over the correction list makes no sense.

It makes sense to me. You don't need to use all of them; each is based on a different algorithm. And again, they are separate because they are not applied at the same point in the pixelpipe, which is important for processing quality. Some may find this superfluous information; others find the order a very important point when developing pictures.

- Provide two (preferably three) sets of icons (or use vector icons) so they remain visible on a high-DPI screen.

All icons are already in SVG, so if there is a problem, it is not there.

- Move the zoom selector in darkroom to where it makes sense: close to the other options in the bottom-right corner of the picture.

More sense for whom? You? Others? For me it is fine next to the preview.

For the others, let's see what you propose as a design so that you can start coding :) The metadata module is one that needs a lot of rework. Let's start with that one. Thanks in advance.

#2 Updated by Vincent Fregeac 8 months ago

I have been away from coding for a very long time (20y+), so when I start coding on darktable (I am still finding my way around the architecture and the role of each library... there's a library about aliens in DT???) it will probably be limited to very simple things, like fixing the spacing between a slider and a text box. I wouldn't venture into revamping the whole metadata/tagging thing right now because I would probably break far more than I fix.

Now, to answer some of your comments:
- Module order: There are two ways to look at how the modules should be presented. There is the programmer/mathematician way, which wants to present the modules according to the algorithmic constraints of the pixelpipe, because that's the way it works and that's the way it should work. Then there is the photographer, who doesn't care how the tool he uses works; he only cares about his photos and his workflow, based on photography logic. How the internal guts of the tool work is irrelevant to a photographer; he cares about his images, not the tool or the pixelpipe. And it shows in the feature requests, some of which have been stale since 2012. On a regular basis, people ask for a feature that would allow them to use DT according to their photographer workflow, but I haven't seen anyone asking for more mathematical/algorithmic logic in DT's workflow. Besides, the original guidelines of DT clearly say that the target audience is the photographer, not the programmer, not the mathematician: the photographer. The good thing is, a GUI is independent of the algorithms, so the GUI can present the modules in one order and the algorithms can process the pixels in a totally different order. And, for those who care more about algorithms and the pixelpipe than photography, it only requires two additional integer fields in the module object, one for the photographer workflow order and one for the pixelpipe order (see the small sketch after these points). An option in the preferences could allow presenting the modules in pixelpipe order if someone really wants it. But, in my opinion, the default order should be the photographic workflow (basic/global exposure/curves, fine-tuning curves, exotic fine-tuning, then creative modules). The only exception I would make is for the denoise modules, because using them at the beginning of the workflow, as a photographer would, really slows down everything else, so they should be presented at the bottom of the list as a hint that you should not activate them while working on the other modules.

- Module grouping: Once again, from a photographer's point of view, where the different denoise steps are applied in the pixelpipe is irrelevant; he just wants less noise. And he wants to have the choice, of course, but, in the end, if the noise is gone, who cares where it happened in the pixelpipe flow (frankly, those who really care about this should use Matlab to process their photos, not photographer-oriented software). Also, since the ability to have multiple instances of the same module is already implemented, if someone really wants to fine-tune noise removal by combining different algorithms, he can create several instances and pick a different algorithm in each. The result would be a GUI that finally rejoins the initial objectives of DT: simple, clear and, above all, not distracting the photographer from his image. Hunting for a module because the list is presented in some "pixelpipe" order photographers have never heard of (and, once again, those who care should use Matlab rather than DT) is far more distracting than a frame around similar options. I mention the frame around options because it was ruled out in DT's GUI guidelines as distracting too much from the image; DT is way past that sort of minor distraction today.

Icons: I have posted another issue about that because, when the GUI is scaled up (via the Linux preferences), the text scales up as expected, but the icons scale down. Since they didn't scale linearly with the text, I thought the icons were bitmaps. Well, there's definitely a problem with some icon scaling, but that's the subject of another issue.

Zoom: As a user, there are three things I care about most when developing a raw image: the image itself, the options to display the image, and the modules that change the image. The image is in the center, good; the modules are all on the left, good; the options are all at the bottom right of the image, good. No, wait, all the options except one. The zoom, which is also an option affecting the image, is beside the thumbnail. Since it's beside the thumbnail, it must zoom the thumbnail, obviously. See where I am going: the options that affect the main image should be close to the main image and grouped together, and the options near the thumbnail should affect the thumbnail, not the main image. And, by the way, having a zoom that affects the thumbnail would also be nice, so we could inspect part of the main image at 1:1 while keeping the main image at "fit to screen", but that's another feature request and, as I said, I think the focus should be on fixing the GUI before adding more features, so the thumbnail zoom can wait.
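
A minimal sketch of the two-order idea mentioned above, with purely hypothetical names (nothing here matches darktable's actual types or code):

    #include <stdlib.h>

    /* Hypothetical module record: two independent positions, so the GUI can
     * sort by photographic workflow while the pixelpipe keeps its own order. */
    typedef struct module_entry_t
    {
      const char *name;
      int pipe_order; /* position used by the pixelpipe (processing order)    */
      int ui_order;   /* position used by the photographer-facing module list */
    } module_entry_t;

    static int cmp_ui_order(const void *a, const void *b)
    {
      return ((const module_entry_t *)a)->ui_order
           - ((const module_entry_t *)b)->ui_order;
    }

    /* The GUI sorts a copy of the list for display; a preference could swap in
     * a cmp_pipe_order comparator instead. The pipe itself only ever looks at
     * pipe_order. */
    static void sort_modules_for_display(module_entry_t *modules, size_t n)
    {
      qsort(modules, n, sizeof(module_entry_t), cmp_ui_order);
    }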

I know I'm very critical, but I wouldn't be if DT were not worth it. In my opinion, and obviously not just mine, it's by far the best contender as far as FOSS digital photography tools go, despite its shortcomings regarding the GUI and DAM. So please take my criticism as a compliment rather than anything else. I wouldn't bother posting issues and feature requests if I didn't consider that DT has the potential to become the best friend of a photographer. As for contributing, as I said, I haven't been in the developer seat for 20+ years and I didn't touch C/C++ at that time (I was a Pascal guy), so don't expect more than mundane fixes from me at first. And, by the way, I haven't found any architecture document on GitHub or Redmine, and the code itself has very little documentation (I know, it's already hard to get documented code from paid programmers). If you could add to GitHub a simple document that lists all the libraries with a one-liner describing their role, it would really help someone grasp the architecture and find the 2-3 lines of code that would make the GUI a little bit better. For someone who already knows DT's architecture by heart it may seem like nothing, but DT has become big enough that going through all the code to grasp the architecture is quite a lot of work when you are looking for 2-3 lines of code that would make the GUI a little bit more refined.

#3 Updated by Aurélien PIERRE 8 months ago

- Modules with the same functionality should be grouped as one module. The fact that there are four separate denoise modules spread all over the correction list makes no sense.

That's not possible. There is no one-size-fits-all denoising algorithm. You have Gaussian noise, Poisson noise, salt-and-pepper noise, and combinations of the three. Each one has its own denoising method. Some apply to raw (non-demosaiced) data, some to demosaiced data. That means they need different places in the pipe, and thus different modules. The darktable philosophy is one algo = one module. That keeps things properly encapsulated, makes them easy to debug, and lets you simply disable the culprit if errors or slowdowns happen.
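
To make the difference concrete: Poisson (photon) noise is usually variance-stabilized first, for instance with the Anscombe transform, so that machinery designed for Gaussian noise can be reused, while Gaussian noise needs no such step. A rough sketch, illustrative only and not darktable code:

    #include <math.h>

    /* Poisson noise has variance equal to its mean, so before applying a
     * Gaussian-noise denoiser the data is often variance-stabilized with the
     * Anscombe transform (illustrative helper, not darktable code). */
    static inline float anscombe(float x)
    {
      return 2.0f * sqrtf(x + 3.0f / 8.0f); /* roughly unit variance afterwards */
    }

    /* Simple algebraic inverse; unbiased inverses are more involved. */
    static inline float anscombe_inverse(float y)
    {
      return (y * y) / 4.0f - 3.0f / 8.0f;
    }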

Combining them is not merely code reorganization; it means redoing the whole pipeline, maybe breaking backwards compatibility, and it's just not possible for the most part.

Also, some modules deal with colour saturation in Lab space, in the middle of the pipe, and others in RGB, at the end of the pipe. The first thing needed is to make every module Lab/RGB bilingual, then make them reorderable, then merge some of them. This is ongoing work and won't be done overnight. And after that, I know three different algorithms to handle saturation, each with its own corner cases and limitations. So it's not appropriate either to settle on one of them arbitrarily and decide the others won't be available for the sake of UI simplicity.

Also, the fact that most modules have three different versions (C/SSE/OpenCL) doesn't help.

And he wants to have the choice, of course, but, in the end, if the noise is gone, who cares where it happened in the pixelpipe flow (frankly, those who really care about this should use Matlab to process their photos, not photographer-oriented software).

I do care, because modules have parametric blending: every time you change settings on a lower module, you invalidate the blending on higher modules.

- Make the metadata editor configurable with the list of IPTC/XMP fields we want (not every user of darktable is a beginner photographer who only needs a title and a description).

See:
  1. https://redmine.darktable.org/issues/11357
  2. https://redmine.darktable.org/issues/8421
  3. https://redmine.darktable.org/issues/9600

Then there is the photographer, who doesn't care how the tool he uses works; he only cares about his photos and his workflow, based on photography logic. How the internal guts of the tool work is irrelevant to a photographer; he cares about his images, not the tool or the pixelpipe.

And that is a mistake. Photo editing is 15 years behind video and 3D/CGI editing, and so is the software, and the 3D guys care about the order of operations for good reasons (see Blender and Natron). Same in Photoshop, with the layer order. But photogs are still in this spoiled-brat state of mind where the software should think for them. Well, that's not possible anymore when you do advanced stuff; you can't have advanced controls with intuitive UIs. darktable has parametric blending options, which make every module closely dependent on what comes before it. If you ignore this order, you are asking for trouble, and you are asking the devs to make choices for you, thus locking you into their mistakes. The order of operations is important, you need to pay attention to it, and the fact that Lightroom has locked you out of it doesn't mean that was a good idea in the first place.

Now, in dt 2.6, we have added a feature that allows reordering the modules in the GUI so that they appear in roughly the same order as the pipe when you run the layout script.

As for contributing, as I said, I haven't been in the developer seat for 20+ years and I didn't touch C/C++ at that time (I was a Pascal guy), so don't expect more than mundane fixes from me at first. And, by the way, I haven't found any architecture document on GitHub or Redmine, and the code itself has very little documentation (I know, it's already hard to get documented code from paid programmers)

I know. There are at least three different implementations of Gaussian blur in the software; I'm not sure any of their devs were aware of the other two or tried to take one and make it more general. And the lack of unified libs + standard APIs + docs is becoming a burden, but that's how it is for now. Fixing it would need project governance with meetings, roadmaps, guidelines and a budget. For now, everyone brings his pet project into the core.

#4 Updated by Vincent Fregeac 8 months ago

Grouping modules: The usual developer/user perception gap: grouping modules doesn't mean merging code. DT is, first and foremost, for 99.9% of the people using it, a UI. So I meant grouping the denoise UI modules (or widgets, in code-speak, if you prefer). The reason is that, whatever happens behind the UI and whenever the different denoise algorithms are applied in the pixelpipe, the concern for noise arises at one point, and only one point, in a photographer's workflow: when noise starts to be visible to a human eye, and before he invests more time in an image that may be ruined by a type of noise none of the algorithms can tame. So it just makes sense that, at that point, the photographer can access all his options in the same place, rather than having to hunt up and down for the different algorithms, especially considering that the module list is not the most user-friendly to use (at least until it can be scrolled like anything else, with the scroll wheel or two fingers on a touchpad). And the pixelpipe can have a totally different flow than the UI workflow; it works perfectly. The best proof is that nobody (except maybe a few DT developers) follows the pipeline order: everybody jumps back and forth in the illogical order of modules and still gets exactly what they want.

And that's the first difference between 3D modelling and RAW development. Computers still don't "think" 3D models as fast as the human brain, so people have no other choice than to bend their thought process to the will of the computer if they want to be productive. On the other hand, even the most basic laptop can process a 2D still image with all the pixel manipulations imaginable in less than a second. So why become the slave of your computer if all you gain is a few milliseconds?

The second difference is the importance of the software in the process. 3D modelling doesn't exist without computers. It starts on a computer, it ends on a computer, and no human could do what a computer does for 3D modelling. On the other hand, negative development existed before computers, and worked really well. The only thing the computer brought is the democratization of custom development. But no amount of development will make an image, while a great image will stay great even with an average development. Just look at the cost of a RAW development software compared to the cost of a typical enthusiast camera plus lenses. There's a reason for that price difference. Now, you could argue that no photographer should use a lens without learning how polymer chemistry and quantum mechanics impact chromatic aberration.

As for photography software being 15 years behind, maybe that's because photographers still want to take the picture themselves... Cars should go three million miles an hour for centuries on a single tank of gas by now, compared to computers, but who wants a car that goes three million miles an hour on a city street? I think the main reason why photography software is "behind" is mostly that it already did everything a photographer needed 15 years ago. A bit more flexibility and a more intuitive UI are always welcome, but for the most part, RAW development was fully covered a decade ago, and nobody wants cellphone-camera features in his RAW development tool.

Don't forget DT is a tool, like a hammer or a vehicle, only one of the many tools in the photographic process and not even the most important. And the same way some people understand the internals of their vehicle to the point that they can extract the best of it without damaging it, and if needed rebuild it entirely, others have the same passion for the guts of their software tools. The same way I don't force everyone who shows up on a track day to fully understand every nut and bolt of their car, don't force your own passion on everyone who uses this tool. Some people just want to drive DT, because it handles well whatever you throw at it. If only there were one blinker switch to signal right and left instead of four separate blinker switches spread all over the dashboard, that's all they ask for.

That being said, I do love understanding every nut and bolt of my cars, so...

IPTC: IPTC is a different set of metadata than EXIF, although exiftool handles both. The issues you referred to are all related to the very limited set of modifiable EXIF fields, i.e. the current fields in the metadata widget. And for the typical enthusiast photographer, a title, a description and two dozen tags (that's really the most you can handle without a hierarchical interface for tags) are far from enough to find your way through the tens or hundreds of thousands of raw files even an amateur photographer quickly accumulates. Luckily, the IPTC Core schema (https://www.iptc.org/std/photometadata/specification/IPTC-PhotoMetadata#metadata-properties) defines a reasonable number of metadata fields, most of them relevant for an enthusiast photographer, and it also defines the XMP fields these metadata should be stored in, so there is no guesswork to ensure compatibility with image viewers or, when the image collection grows, a home-based DAM server (there are a couple of good ones that can run on a simple Raspberry Pi, but a DAM server doesn't fit in the middle of a RAW development workflow). I'm still trying to find my way in the source code, so I have no idea how to draft the code to make it compatible with DT's architecture but, in summary:
- A collection of IPTCField structs, with "title", "value", "exiftool_field", a boolean "show", an int "order" and an int "nb_lines".
- Widget creation with a loop through the IPTCField collection, creating boxes of "nb_lines" lines for the structs where "show" = true.
- A parsing algorithm to populate the collection from a human-readable text file, so the IPTCField collection can be updated to the next IPTC Core schema without modifying the code.
- Replacing the (probably) hard-coded code that populates the exported image with the EXIF fields by a loop going through the IPTCField collection, using the "exiftool_field" values in the calls to exiftool.
- Some rework of the field list used for the collection filters, since the metadata part would no longer be a hard-coded list but a collection; I don't think it should concern more than 4-5 lines of the existing code (I still have to find which library handles that part).
I don't think it would take much work to implement, but I may be wrong. On the other hand, it would probably take me weeks if not months to find where it should be implemented, so someone who's familiar with DT's architecture could really help (or just do it; it would probably take less time than explaining to me how and where to do it).
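
To make the summary above a bit more concrete, here is a rough, purely illustrative C sketch of the data structure and the widget/export loop; the names are invented and nothing here matches darktable's actual code or its exiftool integration:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical description of one configurable metadata field. */
    typedef struct iptc_field_t
    {
      const char *title;          /* label shown in the metadata editor   */
      char value[512];            /* value entered by the user            */
      const char *exiftool_field; /* XMP/IPTC tag name passed to exiftool */
      bool show;                  /* whether the field appears in the UI  */
      int order;                  /* position in the editor               */
      int nb_lines;               /* height of the text box, in lines     */
    } iptc_field_t;

    /* A few example entries; a real list would be parsed from a text file
     * following the IPTC Core schema so it can evolve without code changes. */
    static iptc_field_t fields[] = {
      { "title",       "", "XMP-dc:Title",       true,  0, 1 },
      { "description", "", "XMP-dc:Description", true,  1, 3 },
      { "creator",     "", "XMP-dc:Creator",     true,  2, 1 },
      { "copyright",   "", "XMP-dc:Rights",      false, 3, 1 },
    };

    int main(void)
    {
      const int n = (int)(sizeof(fields) / sizeof(fields[0]));
      /* Widget creation and export would loop over the collection instead of
       * hard-coding each field; here we just print what would be built. */
      for(int i = 0; i < n; i++)
      {
        if(!fields[i].show) continue;
        printf("add a %d-line box '%s' mapped to %s\n",
               fields[i].nb_lines, fields[i].title, fields[i].exiftool_field);
      }
      return 0;
    }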

Last, I would probably be able to contribute to the developer documentation much faster than I could find my way to contributing significant code. Of course, since the documentation that really needs attention is developer-oriented, I would need some guidance at first to make sure I produce the type of documentation that helps developers. As for project governance, it can be much simpler than you think. The guidelines don't need much attention; they are still valid today even if they may not have been updated for a few years. The coding guideline is lacking a bit, but I don't think efficient clean code for DT is any different from efficient clean code in other projects, so a coding guideline may not be that necessary after all. As for the roadmap, the frequency of the subjects of feature requests provides a good baseline to start a yearly discussion/review of the roadmap. The discussion can happen on the dev mailing list; then, after 4-6 weeks, a core group split evenly between the top developers and the most vocal users (in terms of spreading DT on YouTube, blogs and such) votes on the main themes, and these priorities are forwarded to the triagers so they can take them into account when prioritising feature requests and pull requests. Overall, it shouldn't require any pre-planned, long, inefficient meetings, or anything the DT project doesn't already have, so no budget either. And, of course, none of this is enforced; it's only based on influence, so there will still be random pet projects, but they will probably end up at the bottom of the pull-request pile if they don't fit the roadmap, which should be enough of an incentive to spend time on the roadmap priorities. I don't know if it makes sense to you, but that's how I "manage" the developer teams of my clients, and even though I have no actual authority, almost all of them provide me with the information I need to help them, simply through influence and reasonable arguments (or maybe they just like my face, but I don't think it's so likeable that it can bend the will of a team of developers ;-).

#5 Updated by Aurélien PIERRE 8 months ago

Vincent Fregeac wrote:

Grouping modules: The usual developer/user perception gap: grouping modules doesn't mean merging code. DT is, first and foremost, for 99.9% of the people using it, a UI. So I meant grouping the denoise UI modules (or widgets, in code-speak, if you prefer). The reason is that, whatever happens behind the UI and whenever the different denoise algorithms are applied in the pixelpipe, the concern for noise arises at one point, and only one point, in a photographer's workflow: when noise starts to be visible to a human eye, and before he invests more time in an image that may be ruined by a type of noise none of the algorithms can tame. So it just makes sense that, at that point, the photographer can access all his options in the same place

darktable's logic is one filter in the pipe = one module. Changing that means rewriting the whole damn software. If people want a free Lightroom, they are politely advised to crack it. I don't know what sort of photographers you are talking about, but I have been a user of darktable for 7 years and a dev of darktable for just 4 months. The UI has never been my problem, and its simplification is not my priority. It might be complicated, but it is logical. Learn it and try to understand it before asking to change everything.

People will follow the pipe order when it is exposed in the UI. Or maybe not. Given the workforce we have now, it's not going to change anytime soon.

And that's the first difference between 3D modelling and RAW development. Computers still don't "think" 3D models as fast as the human brain, so people have no other choice than to bend their thought process to the will of the computer if they want to be productive. On the other hand, even the most basic laptop can process a 2D still image with all the pixel manipulations imaginable in less than a second. So why become the slave of your computer if all you gain is a few milliseconds?

So you prefer to become a slave of the developers and be locked into their choices rather than having full control and understanding of the pipe?

The second difference is the importance of the software in the process. 3D modelling doesn't exist without computers. It starts on a computer, it ends on a computer, and no human could do what a computer does for 3D modelling. On the other hand, negative development existed before computers, and worked really well. The only thing the computer brought is the democratization of custom development. But no amount of development will make an image, while a great image will stay great even with an average development. Just look at the cost of a RAW development software compared to the cost of a typical enthusiast camera plus lenses. There's a reason for that price difference. Now, you could argue that no photographer should use a lens without learning how polymer chemistry and quantum mechanics impact chromatic aberration.

I don't see how this is related. There is no digital photography without computers either. The number of dimensions is anecdotal here. Computers brought a lot of things, like denoising, sharpness and local enhancements based on frequency-domain processing (FFT and such), colour work, image reconstruction, etc.

As for photography software being 15 years behind, maybe that's because photographers still want to take the picture themselves... Cars should go three million miles an hour for centuries on a single tank of gas by now, compared to computers, but who wants a car that goes three million miles an hour on a city street? I think the main reason why photography software is "behind" is mostly that it already did everything a photographer needed 15 years ago. A bit more flexibility and a more intuitive UI are always welcome, but for the most part, RAW development was fully covered a decade ago, and nobody wants cellphone-camera features in his RAW development tool.

No. That's because photographers are too lazy to learn how image processing actually works. 3D artists have no choice. Serious retouchers as well. Photographers want to push sliders and have the magic done for them. They all switched to digital following a trend and out of a need for cost reduction, so the photo industry, for the most part, is made of people who learned digital editing hands-on. A photographer will never need tools he is not aware of, and he will never be aware of them as long as he doesn't take a look at what the cinema and 3D industries do a hundred times better, because, with CGI and 3D-effects outsourcing, cinema has had to think of unified workflows that allow different studios and different software packages to talk to each other. Photographers work mostly freelance, and are often self-taught, so mediocrity goes unnoticed and bad habits are perpetuated.

Don't forget DT is a tool, like a hammer or a vehicle, only one of the many tools in the photographic process and not even the most important. And the same way some people understand the internals of their vehicle to the point that they can extract the best of it without damaging it, and if needed rebuild it entirely, others have the same passion for the guts of their software tools. The same way I don't force everyone who shows up on a track day to fully understand every nut and bolt of their car, don't force your own passion on everyone who uses this tool. Some people just want to drive DT, because it handles well whatever you throw at it. If only there were one blinker switch to signal right and left instead of four separate blinker switches spread all over the dashboard, that's all they ask for.

Then some people can just subscribe to Lightroom. If you don't understand the internals, not of dt in particular but of image processing, you will never be a good retoucher. Why should we build software for people who ask for mediocrity? Shooting raw files means you want control; having control means having a lot of options and possibilities, and lots of things to learn. Once you have mastered them, you will be efficient and get good results. If someone is not willing to make this effort, he should shoot JPEGs.

IPTC: IPTC is a different set of metadata than EXIF, although exiftool handles both. The issues you referred to are all related to the very limited set of modifiable EXIF fields, i.e. the current fields in the metadata widget. And for the typical enthusiast photographer, a title, a description and two dozen tags (that's really the most you can handle without a hierarchical interface for tags) are far from enough to find your way through the tens or hundreds of thousands of raw files even an amateur photographer quickly accumulates. Luckily, the IPTC Core schema (https://www.iptc.org/std/photometadata/specification/IPTC-PhotoMetadata#metadata-properties)

IPTC has been merged into the XMP specification, if I recall correctly, and that's not the point here since all metadata is handled by the same library. So the problem boils down to adding more options in the metadata handlers, which is not as simple as it sounds, because that lib is a long thread of nested functions called multiple times in different contexts. I tried…

I don't think it would take much work to implement, but I may be wrong. On the other hand, it would probably take me weeks if not months to find where it should be implemented, so someone who's familiar with DT's architecture could really help (or just do it; it would probably take less time than explaining to me how and where to do it).

I thought so as well, tried, failed and gave up.

Last, I would probably be able to contribute to the developer documentation much faster than I could find my way to contributing significant code. Of course, since the documentation that really needs attention is developer-oriented, I would need some guidance at first to make sure I produce the type of documentation that helps developers. As for project governance, it can be much simpler than you think. The guidelines don't need much attention; they are still valid today even if they may not have been updated for a few years. The coding guideline is lacking a bit, but I don't think efficient clean code for DT is any different from efficient clean code in other projects, so a coding guideline may not be that necessary after all. As for the roadmap, the frequency of the subjects of feature requests provides a good baseline to start a yearly discussion/review of the roadmap. The discussion can happen on the dev mailing list; then, after 4-6 weeks, a core group split evenly between the top developers and the most vocal users (in terms of spreading DT on YouTube, blogs and such) votes on the main themes, and these priorities are forwarded to the triagers so they can take them into account when prioritising feature requests and pull requests. Overall, it shouldn't require any pre-planned, long, inefficient meetings, or anything the DT project doesn't already have, so no budget either. And, of course, none of this is enforced; it's only based on influence, so there will still be random pet projects, but they will probably end up at the bottom of the pull-request pile if they don't fit the roadmap, which should be enough of an incentive to spend time on the roadmap priorities. I don't know if it makes sense to you, but that's how I "manage" the developer teams of my clients, and even though I have no actual authority, almost all of them provide me with the information I need to help them, simply through influence and reasonable arguments (or maybe they just like my face, but I don't think it's so likeable that it can bend the will of a team of developers ;-).

You are aware that there are about four active devs these days, all working in their free time, right?

Coding guidelines are necessary. First of all, the lack of UI design and coordination has led to the problems you mentioned, which are really not easy to fix years later. Then, there is a fair amount of duplicated code. One core algorithm in image processing is Gaussian blur. There should be one generic lib in dt to do that (well, times the C/SSE/OpenCL variants), and thus one place to optimize and rework every once in a while with new vectorization instructions, not three or four. But to make a library a library, you need to document it, so back to the initial problem…
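
Just to illustrate what "one generic lib" means here (hypothetical names, plain-C reference only; darktable's actual blur code looks nothing like this), a single separable-Gaussian entry point that the C, SSE and OpenCL back-ends would all implement:

    #include <math.h>
    #include <stdlib.h>

    /* Hypothetical unified entry point: blur a single-channel float image in
     * place with a separable Gaussian of standard deviation sigma. A real lib
     * would add SSE and OpenCL back-ends behind the same signature. */
    void blur_gaussian_1ch(float *img, int width, int height, float sigma)
    {
      const int radius = (int)ceilf(3.0f * sigma);
      float *kernel = malloc((2 * radius + 1) * sizeof(float));
      float *tmp = malloc((size_t)width * height * sizeof(float));

      /* build and normalize the 1-D kernel */
      float sum = 0.0f;
      for(int i = -radius; i <= radius; i++)
      {
        kernel[i + radius] = expf(-(float)(i * i) / (2.0f * sigma * sigma));
        sum += kernel[i + radius];
      }
      for(int i = 0; i <= 2 * radius; i++) kernel[i] /= sum;

      /* horizontal pass: img -> tmp, clamping at the borders */
      for(int y = 0; y < height; y++)
        for(int x = 0; x < width; x++)
        {
          float acc = 0.0f;
          for(int i = -radius; i <= radius; i++)
          {
            int xx = x + i;
            if(xx < 0) xx = 0;
            if(xx >= width) xx = width - 1;
            acc += kernel[i + radius] * img[y * width + xx];
          }
          tmp[y * width + x] = acc;
        }

      /* vertical pass: tmp -> img */
      for(int y = 0; y < height; y++)
        for(int x = 0; x < width; x++)
        {
          float acc = 0.0f;
          for(int i = -radius; i <= radius; i++)
          {
            int yy = y + i;
            if(yy < 0) yy = 0;
            if(yy >= height) yy = height - 1;
            acc += kernel[i + radius] * tmp[yy * width + x];
          }
          img[y * width + x] = acc;
        }

      free(tmp);
      free(kernel);
    }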

The core problem of dt is that people who have the skills have little time, people who have time lack the skills, and everyone's skills are limited (the UI guys suck at maths, the maths guys suck at UI, the software engineers suck at image processing, and the image processing guys suck at writing fast and maintainable code, and nobody really talks or works together because schedules don't match, time is short and we are spread across several time zones). So, in the end, we pile stuff up.

#6 Updated by Roman Lebedev 8 months ago

Vincent Fregeac wrote:

<yet another unreadable non-line-wrapped TLDR>

I think you mean well, but coming into a new project, admitting that you do not have much experience in either coding or darktable itself, complaining about/criticizing every single aspect of it, and recommending yourself as its non-benevolent dictator, in the form of being a "manager" providing others with ordered roadmaps of the features/things you believe should be done, is just a non-starter, I think.

#7 Updated by Vincent Fregeac 8 months ago

The offer to manage the roadmap/guidelines was not to "manage" as in making decisions, just to manage the process of gathering the information, formatting it and forwarding it. As you may have noticed, I said the vote would be by the top developers and most prominent users. I don't belong in either of those categories. I was just offering to handle the logistics in a way that wouldn't require meetings or a big budget, as Aurélien feared.

As for my experience, yes, I haven't looked at DT's source code before, but I've used darktable on and off since version 0.8 (I think, or some 0.x anyway), as well as various other RAW development and DAM tools. I have also spent quite some time in the history of feature requests before posting anything, to see what other DT users were asking for in general and to make sure I'm not asking for a feature that goes against what the rest of the users want. So, as a user, yes, quite some experience. As a developer of DT, I admit, none, but I coded for 10+ years before I moved on to something else.

To come back to the original subject, what I mentioned here is only what has also been requested by several users: the feature requests that have been open the longest and have had the most duplicates are about the UI in general, and mostly about the "usability" of the darkroom. So it's not really just what I believe, but where what I would like meets the wishes of a large portion of users who have posted similar feature requests in the past or mentioned annoying quirks of the UI when making video tutorials on DT.
