Rescan updated directories
- see 'skulls'
- images added to a folder after it was imported into DT are not picked up
Currently this can only be worked around by removing the directory from DT and re-adding it manually.
It should be possible to rescan a folder (and its subfolders) from DT's UI.
A good place for the rescan option would be the folder's context menu (in the /folders/ view of the /collect images/ module).
A script to purge non-existing images can already be found here:
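For context, such a purge script boils down to a query against darktable's library database plus a file-existence check. Below is a minimal Python sketch of that idea, not the linked script; the film_rolls/images table and column names are assumptions based on commonly described darktable schemas, so verify them against your own library.db (and back it up) before running anything like this:

```python
import os
import sqlite3

def purge_non_existing_images(db_path):
    """Delete database rows for image files that no longer exist on disk.

    Schema assumption: film_rolls(id, folder) and
    images(id, film_id, filename). Verify against your own library.db.
    """
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT images.id, film_rolls.folder, images.filename "
        "FROM images JOIN film_rolls ON images.film_id = film_rolls.id"
    ).fetchall()
    # Keep only the ids whose backing file is gone.
    missing = [img_id for img_id, folder, name in rows
               if not os.path.exists(os.path.join(folder, name))]
    con.executemany("DELETE FROM images WHERE id = ?",
                    [(i,) for i in missing])
    con.commit()
    con.close()
    return len(missing)
```

The real script (and darktable itself) must only run this while darktable is closed, since the database is locked or cached while the application is open.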
#1 Updated by Jesko N over 4 years ago
Additionally: after rescanning a directory, all empty film rolls in that directory (or its subdirectories) should be removed from the DB as well. (Currently, empty film rolls are displayed crossed out in the folders view, and it's not even possible to remove them manually via the "remove" option in a folder's context menu.)
#4 Updated by Joel T almost 4 years ago
Could the behavior of purge_non_existing_images.sh be incorporated into re-importing a folder (but only for that folder)?
I use darktable on two machines, both of which access the same images from a common nfs share. In order to refresh changes from the other machine, I have to:
1. Purge missing images
a. Quit darktable
b. Run purge_non_existing_images.sh (open a terminal, etc.)
c. Re-open darktable
2. Refresh changes, add new images
a. Re-import folder
Instead, the desired behavior should be:
1. Refresh changes, add new images, purge missing images
a. Re-import folder
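The desired single-step behavior amounts to diffing a folder tree against what the database already knows. A rough, illustrative Python sketch of that diff (the function, its signature, and the extension list are assumptions for illustration, not darktable code):

```python
import os

# Assumed subset of raw extensions; darktable recognizes many more.
RAW_EXTENSIONS = {".arw", ".nef", ".cr2", ".dng"}

def diff_folder(folder, known_files):
    """Compare files on disk against the set of paths the database knows.

    Returns (to_add, to_purge): new files that should be imported, and
    known entries whose files have disappeared from disk.
    """
    on_disk = set()
    for root, _dirs, names in os.walk(folder):
        for name in names:
            if os.path.splitext(name)[1].lower() in RAW_EXTENSIONS:
                on_disk.add(os.path.join(root, name))
    to_add = sorted(on_disk - known_files)
    to_purge = sorted(known_files - on_disk)
    return to_add, to_purge
```

A "refresh" re-import would then import everything in to_add and drop everything in to_purge, in one pass, without touching unchanged entries.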
#5 Updated by Tobias Ellinghaus almost 4 years ago
- bitness set to 64-bit
- System set to all
- Affected Version set to git development version
- Category set to Lighttable
I guess that could be added to the crawler we have in master. It already looks for updated XMP files; checking the raw files should be possible, too. That way it would give you a list of all images that got removed (*) on startup, with no need to reimport anything.
Would that help?
(*) It will however not look for newly added images.
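The check the crawler would need per image is essentially an mtime comparison against a timestamp stored in the database. A few-line illustrative sketch, not darktable's actual implementation (the "<image file>.xmp" sidecar naming next to the image is also an assumption here):

```python
import os

def xmp_changed(image_path, db_timestamp):
    """Return True if the sidecar .xmp next to image_path is newer
    than the timestamp stored in the database (seconds since epoch).

    Sidecar naming ("<image_path>.xmp") is assumed.
    """
    xmp = image_path + ".xmp"
    if not os.path.exists(xmp):
        return False
    return os.path.getmtime(xmp) > db_timestamp
```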
#6 Updated by Joel T almost 4 years ago
Yes, that would be perfect!
Actually, one thought: would that delay program startup? Hopefully the master crawler runs in parallel and therefore would not increase the start time.
I have a relatively small collection, but already the purge script takes ~10 seconds...
[joel@manjaro ~]$ time ./purge_non_existing_images.sh
#7 Updated by Tobias Ellinghaus almost 4 years ago
- % Done changed from 0 to 20
- Target version set to Future
- Status changed from New to Triaged
- Assignee set to Tobias Ellinghaus
No, it doesn't run in parallel, since subsequent parts of initializing darktable have to run after the crawler is done. If you are on git master you can just enable it in preferences and see for yourself how long it takes.
#8 Updated by Joel T almost 4 years ago
Sorry, I haven't had a chance to test this out since I am not currently on git master.
I appreciate your efforts on this, and I think the best place for it would be on Import -> Folder instead of on startup.
What if a user's drive is temporarily disconnected (network down, USB unplugged, etc.)? In that case, simply opening darktable could unexpectedly purge quite a lot.
However, if a user tries to (re-)import a folder, then it would be clear if the folder was not available, and it would also be expected that the folder contents should be refreshed.
Tobias Ellinghaus wrote:
No need to remove a filmroll, just import it again to have it updated. And empty filmrolls should be removed; IIRC there was a bug a (long) while ago that didn't do that in some cases, but that should be fixed.
I'm trying to switch to Linux with equivalents of what I was running on Windows, finding alternatives wherever there's no Linux version of what I was using. I'm kind of shocked that this feature was requested 4 years ago and still sits in an issue queue. In my case it's not just a hindrance but a major issue.
Some facts to illustrate...
I organize my digital photos by camera/camera-body, then by capture date. I currently have 5 camera/body folders, so each one needs an upper-level folder/filmroll, with (sometimes daily) additions of dated sub-folders underneath. To give you an idea of the numbers, these are the stats from importing just one of those upper folders.
Image files: Sony NEX-F3 .ARW raw - 16MP at 4912x3264 pixels, as 17.2MB files
Imported images: 3924 (raw only, no jpeg), totaling 63.0 GB
Number of sub-folders: 228
Start time: 11:20am
Finish time: 12:33pm
My Linux box, running the latest darktable:
FX-8300 8-core AMD @ 3.3 GHz w/8 GB 1866 MHz CL9 memory, and CentOS 7
Obviously it's not a hardware issue for processing speed, but that's quite an amount of time for a full import. From what I've read and tried so far, and for how I'd have to work, I'm better off dumping the last import and redoing it rather than trying to manipulate the database to keep it current. Imagine if I had to multiply that import time by a factor of 5 for all the other folders!
It would be incredibly unlikely that I'd try to manage files within darktable, given the file structure I have in place and the number of files I'd be dealing with. I would need a spreadsheet, or a notebook of typed/handwritten notes or checklists, to figure out what to prune and/or add, regardless of whether it gets done in batches or as one-at-a-time selections. Remember, there are no descriptive names on in-camera-generated files either. Just getting a valid list of populated folders ready and/or updated would add an immense amount of time to the workflow, purely for file management.
Lightroom, DxO, Capture One, the OEM tools from Nikon/Canon/Sony, etc. all have the same general ability to sync folders, and there's a good reason they all have it as a default function. It simply works for anyone using photo tools once their image collection grows beyond a few hundred photos in total.
Considering that I'm no longer a pro and am back to shooting just for myself, I'm now a lightweight when it comes to the number of files I generate daily. I'm not even counting any film frames I'd convert to digital, which would mean yet another chunk of time added to the workflow. Unfortunately I can't manufacture more time, so I have to find solutions that don't require me to spend too much of it on extra work that's trivial for other software.
I think some type of syncing feature is not only sorely needed, it deserves a higher priority than 4 years on the maybe/to-do list. Until then, it looks like I'll have to keep running a virtual copy of Windows and use what I've always used.