Bug #10003

graphical glitches in preview

Added by Ulrich Pegelow over 5 years ago. Updated over 5 years ago.

Affected Version: git development version


As already discussed on IRC, there has recently been a graphical glitch ("fireworks") in the preview/navigation window. An example is attached; here is how to reproduce it:

1) disable the exposure module
2) re-enable the exposure module
3) make a small adjustment to the EV parameter by shifting the slider

In about one out of two cases I see the graphical glitch briefly appear in the navigation window on step (3). This is with OpenCL disabled. If I enable OpenCL (with the preview and the full pixelpipe running on separate devices), the glitch already appears in steps (1) and (2).

Bisecting the issue has shown that it starts with commit 02ac9a6800435ccc25abbcd2308ad5ea28727497.

My guess: the commit changes the behavior of the preview_dirty flag. Originally that flag was initialized to 1 and only set to zero once the preview pixelpipe had finally succeeded. With the aforementioned commit, the flag is tentatively set to zero and then set back to 1 in case of a problem. IMHO this implies that there is a short period of time during which the flag is zero although the data in the output buffer is not yet valid. This would be consistent with the observation that different processing units (CPU/GPU, i.e. different timings) lead to slightly different behavior.
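The suspected change in flag ordering can be sketched as follows. All names here are illustrative, not darktable's actual code; the point is only the window between clearing the flag and the pipe actually finishing:

```c
#include <stdatomic.h>

/* preview_dirty tells the GUI whether the preview buffer may be drawn.
   Names are hypothetical, for illustration only. */
atomic_int preview_dirty = 1;

/* Ordering before the bisected commit: the flag stays dirty until the
   pipe has actually succeeded, so a reader never sees a "clean" flag
   over a stale buffer. */
void process_old(int (*run_pipe)(void)) {
  atomic_store(&preview_dirty, 1);
  if (run_pipe() == 0)                  /* 0 = success */
    atomic_store(&preview_dirty, 0);
}

/* Ordering after the commit: the flag is cleared optimistically and
   only set back on failure.  Between the first store and the end of
   run_pipe() the flag reads 0 while the buffer is still invalid,
   which is the window in which the "fireworks" could be drawn. */
void process_new(int (*run_pipe)(void)) {
  atomic_store(&preview_dirty, 0);      /* window opens here */
  if (run_pipe() != 0)                  /* non-zero = failure */
    atomic_store(&preview_dirty, 1);
}

/* stub pipes for demonstration */
int run_ok(void)   { return 0; }
int run_fail(void) { return 1; }
```

In both orderings the flag ends up in the same final state; the difference is only what a concurrent reader can observe mid-run, which matches the timing-dependent CPU/GPU symptoms.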

Attachments:
IMG_4956.CR2 (12.8 MB), Ulrich Pegelow, 06/29/2014 11:58 AM
IMG_4956_01.CR2.xmp (7.26 KB), Ulrich Pegelow, 06/29/2014 11:58 AM


#1 Updated by Tobias Ellinghaus over 5 years ago

  • % Done changed from 0 to 20
  • Status changed from New to Triaged

That is correct; in that commit I chose to resolve the race condition in favor of showing invalid data. The alternative was constant reprocessing, which seemed worse to me. One way to fix this for real is to get rid of the race by wrapping all accesses to the flag in a mutex; however, that can end up serializing the work, making it pointless to run the pipe processing in a separate thread. Maybe there are other ways to fix it; I am not too comfortable with the code at hand.
My best guess as to why we didn't see these problems before is that the timings of adding and scheduling a job changed slightly.
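The mutex approach described above could look like the following minimal sketch (hypothetical names, not darktable's actual code). Note that guarding only the flag makes individual reads and writes consistent; to truly close the window the lock would have to be held across the whole pipe run, which is exactly the serialization concern:

```c
#include <pthread.h>

/* Hypothetical mutex-guarded accessors for the dirty flag. */
static pthread_mutex_t dirty_mutex = PTHREAD_MUTEX_INITIALIZER;
static int preview_dirty = 1;

void set_preview_dirty(int v) {
  pthread_mutex_lock(&dirty_mutex);
  preview_dirty = v;
  pthread_mutex_unlock(&dirty_mutex);
}

int get_preview_dirty(void) {
  pthread_mutex_lock(&dirty_mutex);
  int v = preview_dirty;
  pthread_mutex_unlock(&dirty_mutex);
  return v;
}
```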

#2 Updated by Ulrich Pegelow over 5 years ago

I don't know the background of the constant reprocessing, as I have not observed it yet.

I wonder if the one-flag approach of preview_dirty is too simplistic. What are the relevant status changes we need to track? Originally we only checked "not ready" versus "ready". The current code is more like "has been started" versus "failed".

I think there are four status options:

"not running"
"running"
"finished successfully"
"failed"

Does this make sense?
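The four-state idea above could be expressed as a small state enum for the preview buffer; the GUI then draws only when the output is known-good. Names are illustrative, not the identifiers darktable actually uses:

```c
/* Hypothetical four-state status for the preview pixelpipe output. */
typedef enum {
  PIPE_NOT_RUNNING,  /* not started or invalidated: buffer stale      */
  PIPE_RUNNING,      /* pipe is processing: buffer not yet valid      */
  PIPE_VALID,        /* pipe finished successfully: safe to display   */
  PIPE_FAILED        /* pipe aborted: buffer stale, needs reprocessing */
} pipe_status_t;

/* The GUI may only read the buffer when processing has succeeded;
   every other state means "keep showing the old content or wait". */
int buffer_is_displayable(pipe_status_t s) {
  return s == PIPE_VALID;
}
```

Compared with a single dirty bit, this distinguishes "work in progress" from "work failed", so the flag never claims validity before the pipe has finished.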

#3 Updated by Ulrich Pegelow over 5 years ago

I implemented something along the lines of my last comment in pull request 616. It fixes the graphical glitches; I hope it doesn't introduce other problems.

Tobias Ellinghaus: please review.

#4 Updated by Ulrich Pegelow over 5 years ago

  • % Done changed from 20 to 100
  • Status changed from Triaged to Fixed
