TY  - JOUR
N2  - Simulating combinations of depth-of-field and motion blur is an important factor in cinematic quality for synthetic images, but can be slow to compute. Splatting the point-spread function (PSF) of every pixel is general and provides high quality, but requires prohibitive compute time. We accelerate this in two steps: in a pre-process, we optimize for sparse representations of the Laplacians of all possible PSFs, which we call spreadlets. At runtime, spreadlets can be splatted efficiently to the Laplacian of an image. Integrating this image produces the final result. Our approach scales faithfully to strong motion and large out-of-focus areas and compares favorably in speed and quality with off-line and interactive approaches. It is applicable both to synthesizing from pinhole images and to reconstructing from stochastic images, with or without layering.
ID  - discovery10051088
UR  - https://doi.org/10.1145/3197517.3201379
PB  - Association for Computing Machinery
SN  - 0730-0301
JF  - ACM Transactions on Graphics
A1  - Leimkühler, T
A1  - Seidel, H-P
A1  - Ritschel, T
TI  - Laplacian kernel splatting for efficient depth-of-field and motion blur synthesis or reconstruction
AV  - public
VL  - 37
Y1  - 2018/08//
IS  - 4
N1  - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
ER  -