EECS 397/497: Computational Photography Seminar

Spring 2018, Mondays 3-6pm - Professor Oliver (Ollie) Cossairt

 

 

The Lytro camera captures a 4D light field of a scene, enabling photographs to be digitally refocused after capture.
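
For intuition, the digital refocusing enabled by a light field can be summarized as a "shift-and-add" operation over the camera's sub-aperture views: each view is shifted in proportion to its position within the lens aperture, and the shifted views are averaged. The short Python sketch below illustrates the idea for a light field stored as a NumPy array of sub-aperture images; the array layout, the function name, and the slope parameter are assumptions made for illustration, not the actual Lytro processing pipeline (see the Week 4 readings on light field imaging for the full treatment).

    # Minimal shift-and-add refocusing sketch (illustrative only, not Lytro's pipeline).
    # Assumes a grayscale light field lf[u, v, y, x] made of U x V sub-aperture views.
    import numpy as np
    from scipy.ndimage import shift

    def refocus(lf, slope):
        """Shift each sub-aperture view in proportion to its (u, v) offset from the
        aperture center, then average; 'slope' selects the synthetic focal plane
        (slope = 0 reproduces the original focus)."""
        U, V, H, W = lf.shape
        uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy, dx = slope * (u - uc), slope * (v - vc)
                out += shift(lf[u, v], (dy, dx), order=1, mode='nearest')
        return out / (U * V)

    # Example: refocus a synthetic 5x5 grid of 64x64 views at two different depths.
    lf = np.random.rand(5, 5, 64, 64)
    near, far = refocus(lf, slope=0.5), refocus(lf, slope=-0.5)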

 

 

Refractive shape from light field distortion using a light field probe. From left to right: the probe image, extracted normals, and the estimated 3D surface.

 

 

A transient image showing light in flight: a disco ball with many mirrored facets, illuminated from the left and colored according to the time offset of the main intensity peak.

 

Course Goals

To teach the state of the art in computational photography, including computational cameras, computational lighting, and computational displays. Students will read and present 2-4 papers on current topics in the field. There will also be a final project in which students implement a research project of their choosing. The goal of the project is to synthesize the concepts learned in the course to produce novel imaging systems with new functionality.

Course Description

This course is the second in a two-part series that explores the emerging field of computational photography. Computational photography combines ideas from computer vision, computer graphics, and image processing to overcome limitations in image quality such as resolution, dynamic range, and defocus/motion blur. The course covers state-of-the-art topics in computational photography, including motion/defocus deblurring cameras, light field cameras, computational displays, and much more.

Course assignments will consist of 2-4 paper presentations and a final project. There will be no midterm or final exam. Students will choose 2-4 papers to read from the list of topics on the course webpage. For each of these papers, a brief 15-20 minute presentation will be given in class, explaining the core idea and technical novelty, followed by a discussion of how the technique relates to other recent work in the field. After the presentation, each student will submit a Paper Review Form.

Final Projects

For the final project, students will have an opportunity to implement a project on their own or in teams. The project may include some camera, lighting/projector, optics, or image processing development. Resources will be provided to assist students in their research projects (e.g. SLR cameras, lenses, light field cameras, projectors, etc.). The project may be to reproduce results from one of the papers discussed in class, or it may be to do something entirely different.

Project proposals will be due Monday May 7, a short milestone report will be due Monday May 21, and the final results will be presented as a poster on the final day of class, June 4.

You can find a list of project ideas on the Project Ideas Webpage.

What to submit:

The final project counts for 40% of your final grade in the course and consists of three parts:

1) Project Proposal (30% of project grade)

The project proposal should clearly state the goal of your project, your objectives over the remaining 5 weeks of the course, and how you intend to achieve them. Your proposal must include a set of evaluation criteria that will be used to determine whether your project was successful. The proposal should be 1-2 pages and include the following:

_      Motivation for proposed research

_      Related work

_      Proposed technical approach

_      Timeline of work for remainder of quarter

_      Goals for work completed by intermediate milestone and project end

_      A set of 2-5 evaluation criteria for assessing the success of your project, according to the established goals of the project

2) Intermediate Milestone (20% of project grade)

The intermediate milestone report should contain a short, one-paragraph description of the accomplishments made so far. It should clearly state whether the work done to date has met the goals set forth in your project proposal. Include in the document any pictures, simulations, or other documentation of the work completed so far.

3) Project Poster (50% of project grade)

Your project poster should be similar to a poster presented at a conference. It should explain the motivation for your project, related work, and your technical approach, and present your results, all in a format that is easy to interpret visually. The poster should contain all the information necessary to convey the research problem you worked on and what you accomplished. If you can demonstrate your work by bringing hardware or software along to the poster session, you are encouraged to do so.

Literature Survey Option:

Students who do not wish to implement a project may alternatively read an additional 3 papers from the course website that were not presented in class. A presentation summarizing these papers will be given on the final day of class, along with a written 6-8 page report (total, for all papers) submitted at that time.

Prerequisites

This course will be run as a seminar, open to all students with background in any of the three core areas: computer vision, computer graphics, or photography. If you are interested, please contact the instructor to discuss!

Topics Covered

_       Week 1: Introduction to Computational Photography

_       Week 2: Defocus, Depth of Field and Coded Aperture

_       Week 3: Motion and Video Processing

_       Week 4: Light Field Imaging

_       Week 5: Computational Displays

_       Week 6: Light Transport Acquisition and Processing

_       Week 7: Structured Illumination and 3D Capture

_       Week 8: Time-of-Flight Imaging and Looking Around Corners

_       Week 9: Compressive Imaging

_       Week 10: Final Projects

 

Grading

·       Presentations and Discussions - 50% (some simple tips about giving presentations)

·       Paper Reviews - 10%

·       Final Project - 40%

 

Texts

There will be no required text; readings will be posted on the course website.

Course Instructor

Oliver (Ollie) Cossairt, Rm 3-211 Ford Design Center, 2133 Sheridan Road, Evanston, IL 60208. Email: Ollie@eecs.northwestern.edu. Office phone: (847) 491-0895.

Syllabus and List of Papers

Tuesday April 3, Week 1 - Introduction

Monday April 9, Week 2 – Defocus, Depth of Field, and Coded Aperture

Presentations:

_       Image and depth from a conventional camera with a coded aperture,  A. Levin, R. Fergus, F. Durand, and W. T. Freeman,  Proc ACM Siggraph 2007

_       Diffusion Coded Photography for Extended Depth of Field,   O. Cossairt, C. Zhou, and S. K. Nayar,  Proc. ACM Siggraph 2010

_       Confocal Stereo,  Samuel W. Hasinoff and Kiriakos N. Kutulakos, International Journal of Computer Vision, 2009.

_       FlatCam: Thin, bare-sensor cameras using coded aperture and computation, M. Salman Asif, Ali Ayremlou, Aswin Sankaranarayanan, Ashok Veeraraghavan, and Richard Baraniuk, IEEE TCI, 2017.

_       DiffuserCam: lensless single-exposure 3D imaging, Nick Antipa, Grace Kuo, Reinhard Heckel, Ben Mildenhall, Emrah Bostan, Ren Ng, and Laura Waller, Optica 2017.

Background Readings:

_       Lensless Imaging: A computational renaissance, Vivek Boominathan, Jesse K Adams, M Salman Asif, Benjamin W Avants, Jacob T Robinson, Richard G Baraniuk, Aswin C Sankaranarayanan, Ashok Veeraraghavan, IEEE Signal Processing Magazine, 2016.

_       Flexible Depth of Field Photography,   S. Kuthirummal, H. Nagahara, C. Zhou, and S.K. Nayar,  IEEE Trans. PAMI, vol. 33, no. 1, pp. 58-71, 2011

_       Extracting Depth and Matte using a Color-Filtered Aperture. Yosuke Bando, Bing-Yu Chen, and Tomoyuki Nishita. SIGGRAPH Asia 2008

_       New paradigm for imaging systems,   W. T. Cathey and E. R. Dowski,  Applied Optics, vol. 41, pp. 6080-6092, 2002

_       Coded Aperture Pairs for Depth from Defocus and Defocus Deblurring, Changyin Zhou, Stephen Lin, and Shree Nayar, International Journal of Computer Vision (IJCV), Volume 93, Number 1, Pages 53-72, 2011

_       What are Good Apertures for Defocus Deblurring?, Changyin Zhou and Shree Nayar, ICCP 2009 (oral presentation)

_       Depth and Deblurring from a Spectrally-varying Depth-of-Field, Ayan Chakrabarti and Todd Zickler, Proceedings of the European Conference on Computer Vision, 2012

Monday April 16, Week 3 - Motion and Video Processing

Presentations:

_       Coded Exposure Photography: Motion Deblurring using Fluttered Shutter, Ramesh Raskar, Amit Agrawal, Jack Tumblin, ACM SIGGRAPH 2006.

_       CoLux: Multi-Object 3D Micro-Motion Analysis Using Speckle Imaging, Brandon M. Smith, Pratham Desai, Vishal Agarwal, Mohit Gupta, Siggraph 2017.

_       Eulerian Video Magnification for Revealing Subtle Changes in the World, Hao-Yu Wu, M. Rubinstein, Eugene Shih, John Guttag, Frédo Durand, William T. Freeman, ACM SIGGRAPH 2012

_       Direct face detection and video reconstruction from event cameras, S. Barua, Y. Miyatani and A. Veeraraghavan, IEEE WACV, 2016.

_       DistancePPG: Robust non-contact vital signs monitoring using a camera, Mayank Kumar, Ashok Veeraraghavan and Ashu Sabharwal, Biomedical Optics Express, 2016.

Background Readings:

_       Motion Invariant Photography, Levin et al. ACM SIGGRAPH 2008.

_       Dark Flash Photography, Dilip Krishnan, Rob Fergus, ACM SIGGRAPH 2009

_       Image Deblurring using Inertial Measurement Sensors, Neel Joshi, Sing Bing Kang, C. Lawrence Zitnick, Richard Szeliski, ACM SIGGRAPH 2010

_       Image Deblurring with Blurred/Noisy Image Pairs, Lu Yuan, Jian Sun, Long Quan, Heung-Yeung Shum, ACM SIGGRAPH 2007

_       Near-Invariant Blur for Depth and 2D Motion via Time-Varying Light Field Analysis. Yosuke Bando, Henry Holtzman, and Ramesh Raskar, ToG 2013 (presented at SIGGRAPH 2013).

_       When does Computational Imaging Improve Performance? Oliver Cossairt, Mohit Gupta, Shree Nayar, Transactions on Image Processing (2012).

Monday April 23, Week 4  - Light Field Imaging

Presentations:

_       Light field microscopy , M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, Proc. ACM Siggraph 2006.

_       Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array, Zachary F. Phillips, Michael V. D'Ambrosio, Lei Tian, Jared J. Rulison, Hurshal S. Patel, Nitin Sadras, Aditya V. Gande, Neil A. Switz, Daniel A. Fletcher, Laura Waller, PLOS ONE 2016.

_       Unstructured Light Fields, Abe Davis, Marc Levoy, Fredo Durand, Eurographics 2012.

_       Shield Fields: Modeling and Capturing 3D Occluders, Douglas Lanman, Ramesh Raskar, Amit Agrawal, Gabriel Taubin, Siggraph Asia 2008.

_       SAVI: Synthetic Apertures for Long-Range, Sub-Diffraction Limited Visible Imaging Using Fourier Ptychography, Jason Holloway, Yicheng Wu, Manoj Kumar Sharma, Oliver Cossairt, and Ashok Veeraraghavan. Science Advances, 2017.

Background Readings:

_       Light Field Rendering, M. Levoy and P. Hanrahan. ACM SIGGRAPH, pp. 31-42, 1996.

_       High Performance Imaging Using Large Camera Arrays, B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E Antunez, A. Barth, A. Adams, M. Horowitz and M. Levoy. ACM SIGGRAPH, pp. 765-776, 2005.

_       Refractive Shape from Light Field Distortion, G. Wetzstein, W. Heidrich, R. Raskar, ICCV 2011

_       The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution,  T. E. Bishop and P. Favaro,  IEEE Trans. PAMI, v.32, n.5, 2012

_       Light Field Photography with a Hand-Held Plenoptic Camera, Ng et al., Stanford Tech. Report, 2005.

_       Recording and controlling the 4D light field in a microscope, Marc Levoy, Zhengyun Zhang, Ian McDowall, Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162.

_       Single Lens 3D-Camera with Extended Depth-of-Field, C. Perwass and L. Wietzke, Proc. SPIE: Human Vision and Electronic Imaging, vol. 8291, pp. 1-15, 2012

_       Programmable aperture photography: multiplexed light field acquisition, C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, and H. H. Chen,  Proc. ACM Siggraph 2008

_       Light Field Video Stabilization, Brandon Smith, Li Zhang, Hailin Jin, Aseem Agarwala, IEEE ICCV 2009

_       Computational Cameras: Convergence of Optics and Processing, C. Zhou and S. K. Nayar, IEEE Trans. Image Processing, v.20, n.12, 2011

Monday April 30, Week 5 - Computational Displays

Presentations:

_       Hand-Held Schlieren Photography with Light Field Probes, G. Wetzstein, R. Raskar, W. Heidrich, International Conference of Computational Photography (ICCP) 2011.

_       Goal-based Caustics, Marios Papas, Wojciech Jarosz, Wenzel Jakob, Szymon Rusinkiewicz, Wojciech Matusik, Tim Weyrich, Computer Graphics Forum, 2011.

_       Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays, G. Wetzstein, D. Lanman, W. Heidrich, R. Raskar, ACM Transactions on Graphics (Proc. Siggraph) 2011.

_       Color Contoning For 3D Printing, Vahid Babaei, Kiril Vidimče, Michael Foshey, Alexandre Kaspar, Piotr Didyk, Wojciech Matusik, ACM Transactions on Graphics (SIGGRAPH 2017)

_       Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays, F. Huang and G. Wetzstein and B. Barsky and R. Raskar, ACM SIGGRAPH 2014.

_       Focal Surface Displays, N. Matsuda, A. Fix, D. Lanman, ACM Trans. Graph. 2017.

Background Readings:

_       NETRA: Interactive Display for Estimating Refractive Errors and Focal Range, Vitor F. Pamplona, Ankit Mohan, Manuel M. Oliveira, Ramesh Raskar, ACM SIGGRAPH 2010.

_       Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization, Douglas Lanman, Matthew Hirsch, Yunhee Kim, Ramesh Raskar. ACM Transactions on Graphics, 2010.

_       Polarization Fields: Dynamic Light Field Display using Multi-Layer LCDs, D. Lanman, G. Wetzstein, M. Hirsch, W. Heidrich, R. Raskar, ACM Transactions on Graphics (Proc. Siggraph Asia) 2011

_       Towards Passive 6D Reflectance Displays, M. Fuchs, R. Raskar, H.-P. Seidel, H. P. A. Lensch, ACM Transactions on Graphics (Proc. SIGGRAPH), 2008.

_       Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting. G. Wetzstein, D. Lanman, M. Hirsch, R. Raskar. Proc. of SIGGRAPH 2012 (ACM Transactions on Graphics 31, 4), 2012.

_       Lighting Sensitive Display, Shree Nayar, Peter Belhumeur and Terry Boult, ACM Transactions on Graphics, October 2004

_       Rendering for an Interactive 360 Light Field Display, Andrew Jones, Ian McDowall, Hideshi Yamada, Mark Bolas, Paul Debevec, ACM SIGGRAPH 2007

Monday May 7, Week 6 - Light Transport Acquisition and Processing

Presentations:

_       Dual Photography, Pradeep Sen, Billy Chen, Gaurav Garg, Steve Marschner, Mark Horowitz, Marc Levoy, Hendrik P. A. Lensch, ACM SIGGRAPH, 2005.

_       Synthetic Aperture Confocal Imaging, Marc Levoy, Billy Chen, Vaibhav Vaish, Mark Horowitz, Ian McDowall, Mark Bolas, ACM SIGGRAPH, 2005.

_       Fast Separation of Direct and Global Components of a Scene Using High Frequency Illumination, Shree K. Nayar, Gurunandan Krishnan, Michael D. Grossberg, Ramesh Raskar, ACM SIGGRAPH, 2006.

_       Temporal Frequency Probing for 5D Transient Analysis of Global Light Transport, Matthew O'Toole, Felix Heide, Lei Xiao, Matthias B. Hullin, Wolfgang Heidrich, and Kiriakos N. Kutulakos, ACM SIGGRAPH, 2014.

_       Optical Computing for Fast Light Transport Analysis, Matthew O'Toole and Kiriakos N. Kutulakos. ACM SIGGRAPH Asia, 2010.

Background Readings:

_       A theory of inverse light transport, S. M. Seitz, Y. Matsushita, K. N. Kutulakos, Proc. ICCV 2005, pp. 1440-1447

_       Primal-Dual Coding to Probe Light Transport, Matt O'Toole, Ramesh Raskar and Kyros Kutulakos, ACM SIGGRAPH 2012

_       Helmholtz Stereopsis: Exploiting Reciprocity for Surface Reconstruction, T. Zickler, et al. IJCV 2002.

Monday May 14, Week 7 - Structured Illumination and 3D Capture

Presentations:

_       Microgeometry Capture using an Elastomeric Sensor, Micah K. Johnson, Forrester Cole, Alvin Raj, and Edward H. Adelson. (ACM SIGGRAPH, 2011)

_       Motion-Aware Structured Light Using Spatio-Temporal Decodable Patterns, Yuichi Taguchi, Amit Agrawal, and Oncel Tuzel , ECCV 2012

_       Outdoor Kinect: Structured Light in Sunlight, Mohit Gupta, Qi Yin and Shree Nayar, ICCV 2013

_       Structured Light in Global Illumination, Mohit Gupta, A. Agrawal, A. Veeraraghavan and Srinivasa Narasimhan, IJCV 2013.

_       MC3D: Motion Contrast 3D Laser Scanner, N. Matsuda, M. Gupta, O. Cossairt. ICCP 2016.

Background Readings:

_       Diffuse Structured Light, Shree Nayar, Mohit Gupta, IEEE ICCP 2012

_       Spacetime Stereo: Shape Recovery for Dynamic Scenes, L. Zhang, B. Curless, and S. M. Seitz. CVPR, Vol. 2, pp. 367-374, 2003.

_       Micro Phase Shifting, Mohit Gupta, Shree Nayar, IEEE CVPR 2012

_       Projection Defocus Analysis for Scene Capture and Image Display, Li Zhang, Shree K. Nayar, ACM ToG (also Proc. of ACM SIGGRAPH), Jul, 2006.

_       3D Photography on your desk, J.-Y. Bouguet and P. Perona, IEEE ICCV, pp. 43-50, 1998.

_       High-Resolution, Real-time 3D Shape Acquisition, Song Zhang, Peisen Huang, CVPR 2004

Monday May 21, Week 8 – Time-of-Flight Imaging and Looking Around Corners

Presentations:

_       Phasor Imaging: A Generalization of Correlation-Based Time-of-Flight Imaging, M. Gupta, S. K. Nayar, M. Hullin, and J. Martin, ACM Trans. on Graphics, 2015.

_       Epipolar Time-of-Flight Imaging, Supreeth Achar, Joseph R. Bartels, William L. ‘Red’ Whittaker, Kiriakos N. Kutulakos, Srinivasa G. Narasimhan. ACM SIGGRAPH 2017.

_       Doppler Time-of-Flight Imaging, Felix Heide, Wolfgang Heidrich, Matthias B. Hullin, Gordon Wetzstein, SIGGRAPH 2015.

_       Femto-Photography - Capturing and Visualizing the Propagation of Light, A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, R. Raskar, ACM SIGGRAPH 2013.

_       Low-budget Transient Imaging using Photonic Mixer Devices, F. Heide, M. Hullin, J. Gregson, W. Heidrich, ACM Trans. Graphics (Proc. Siggraph), 2013.

_       Recovering 3D Shape around a Corner using Ultra-Fast Time-of-Flight Imaging, A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, Nature Communications, March 2012.

Background Readings:

_       Turning Corners into Cameras: Principles and Methods, Katherine L. Bouman, Vickie Ye, Adam B. Yedidia, Fredo Durand, Gregory W. Wornell, Antonio Torralba, and William T. Freeman, ICCV 2017.

_       Confocal non-line-of-sight imaging based on the light-cone transform, Matthew O'Toole, David B. Lindell, and Gordon Wetzstein, Nature, 2018.

_       Diffuse Mirrors: "Looking Around Corners" Using Inexpensive Time-of-Flight Sensors, Felix Heide, Lei Xiao, Wolfgang Heidrich, Matthias Hullin, CVPR 2014.

_       Tracking objects outside the line of sight using 2D intensity images, Jonathan Klein, Christoph Peters, Jaime Martín, Martin Laurenzis and Matthias Hullin, Scientific Reports, 2016.

 

Monday May 28, Week 9 – Compressive Imaging

Presentations:

_       LiSens - A Scalable Architecture for Video Compressive Sensing, Jian Wang, Mohit Gupta, and Aswin C. Sankaranarayanan, ICCP 2015.

_       Flutter Shutter Video Camera for Compressive Sensing of Videos, Jason Holloway, et al., IEEE ICCP, 2012.

_       Video from a Single Exposure Coded Photograph using a Learned Over-Complete Dictionary, Yasunobu Hitomi, Jinwei Gu, Mohit Gupta, Tomoo Mitsunaga, Shree K. Nayar, IEEE ICCV 2011

_       Compressive Epsilon Photography for Post-Capture Control in Digital Imaging, Atsushi Ito, Salil Tambe, Kaushik Mitra, Aswin C. Sankaranarayanan, Ashok Veeraraghavan, in ACM SIGGRAPH, 2014.

_       Learning to Synthesize a 4D RGBD Light Field from a Single Image, Pratul P. Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, Ren Ng, ICCV 2017.

Background Readings:

_       Compressive Light Field Photography using Overcomplete Dictionaries and Optimized Projections. Kshitij Marwah, Gordon Wetzstein, Yosuke Bando, and Ramesh Raskar. SIGGRAPH 2013.

_       Single-pixel imaging via compressive sampling, Marco F. Duarte, Mark A. Davenport, Dharmpal Takhar, Jason N. Laska, Ting Sun, Kevin F. Kelly, and Richard G. Baraniuk, IEEE Signal Processing Magazine, 2008.

_       Coded Strobing Photography: Compressive Sensing of High Speed Periodic Videos, Veeraraghavan, A., Reddy, D., Raskar, R., IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011

_       P2C2: Programmable Pixel Compressive Camera for High Speed Imaging. Reddy, D., Veeraraghavan, A., Chellappa, R., IEEE Conference on Computer Vision and Pattern Recognition, 2011.

_       Compressive Structured Light for Recovering Inhomogenous Participating Media, J. Gu, Shree Nayar, E. Grinspun, P. Belhumeur, R. Ramamoorthi, ECCV 2008

_       Video rate spectral imaging using a coded aperture snapshot spectral imager, Ashwin Wagadarikar, Nikos Pitsianis, Xiaobai Sun, and David Brady, Optics Express, 17 (8) (2009)

_       High-resolution Hyperspectral Imaging via Matrix Factorization, Rei Kawakami, John Wright, Yu-Wing Tai, Yasuyuki Matsushita, Moshe Ben-Ezra, and Katsushi Ikeuchi, IEEE CVPR 2011.

_       Coded aperture compressive temporal imaging, P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, D. J. Brady, Optics Express, 21(9), pp. 10526-10545, 2013.

 

Monday June 4, Week 10 - Project Presentations