The Computational Imaging Systems Lab
at
UC San Diego

Current areas of interest

Learning Optical Designs

What is the ideal camera for capturing optical flow? What about for classifying objects? Data-driven system design is a foundational technology for next-generation imaging systems that lets us answer questions like these. By joining differentiable hardware models with image processing algorithms, we use data to jointly optimize both optics and algorithms.
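As a minimal sketch of the idea (a hypothetical toy, not our actual pipeline: the scene, the one-parameter Gaussian optic, and the Wiener-style reconstruction are all stand-ins), the JAX snippet below jointly optimizes the PSF width and the reconstruction's regularization weight by descending the gradient of the final image error:

```python
import jax
import jax.numpy as jnp

N = 64  # image side length

def psf(sigma):
    # Hypothetical one-parameter optic: a Gaussian PSF of learnable width.
    x = jnp.arange(N) - N // 2
    g = jnp.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def otf(sigma):
    return jnp.fft.fft2(jnp.fft.ifftshift(psf(sigma)))

def forward(scene, sigma):
    # Image formation: convolve the scene with the PSF via the FFT.
    return jnp.real(jnp.fft.ifft2(jnp.fft.fft2(scene) * otf(sigma)))

def reconstruct(meas, sigma, reg):
    # Wiener-style deconvolution with a learnable regularization weight.
    H = otf(sigma)
    W = jnp.conj(H) / (jnp.abs(H) ** 2 + reg)
    return jnp.real(jnp.fft.ifft2(jnp.fft.fft2(meas) * W))

def loss(params, scene, key):
    sigma, log_reg = params
    meas = forward(scene, sigma) + 0.01 * jax.random.normal(key, scene.shape)
    return jnp.mean((reconstruct(meas, sigma, jnp.exp(log_reg)) - scene) ** 2)

key = jax.random.PRNGKey(0)
scene = jax.random.uniform(key, (N, N))      # stand-in for a training image
params = (jnp.array(3.0), jnp.array(-4.0))   # initial PSF width, log regularizer
grad_fn = jax.jit(jax.grad(loss))
for step in range(100):
    g = grad_fn(params, scene, jax.random.fold_in(key, step))
    params = tuple(p - 0.01 * dg for p, dg in zip(params, g))  # joint gradient step
```

In a real system the scalar PSF width would be replaced by a full optical parameterization and the Wiener filter by a learned reconstruction, but the gradient flow through hardware model and algorithm is the same.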

Optical Neural Imaging

Imaging the fluorescence dynamics of neurons remains a tremendous challenge. Developing methods that improve the field of view, temporal resolution, imaging depth, device size, and scattering resistance of optical tools will help neuroscientists connect neural dynamics to animal behavior.

Computational Photography

Refining the design of computational photography hardware (optics and sensors) may enable even better photography in the future. Our lab is exploring novel designs for HDR capture, high-speed imaging, and multi-spectral sensing, with applications in photography and, potentially, autonomous vehicle vision systems.

Faster IR Spectroscopy

Optical infrared spectroscopic sensing entails measuring three spectral variables across two (or even three!) spatial dimensions. Developing efficient sampling and reconstruction algorithms will greatly increase the measurement speed of this high-dimensional nonlinear optical problem.

Oceanographic Imaging

Monitoring the micro- and mesoscopic contents of the ocean requires sensing tiny objects across vast volumes of water. This is an exciting area for computational imaging techniques such as holography and high-throughput imaging.

Differentiable Rendering

Differentiable renderers bring the simulation efficiency developed in the graphics community to bear on realistic hardware modeling problems. This is an exciting area because it enables fast, accurate modeling of the outputs of optical systems and provides the derivatives needed to drive end-to-end optimization.
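As a toy illustration of the core difficulty (one-dimensional and entirely hypothetical): a hard visibility edge has zero gradient almost everywhere, so differentiable renderers replace it with a softened one that exposes a usable derivative.

```python
# Toy 1D "renderer": pixels left of `edge` are lit. A hard step would give
# zero gradient almost everywhere; a sigmoid edge makes the loss differentiable.
import jax
import jax.numpy as jnp

xs = jnp.linspace(0.0, 1.0, 256)          # pixel centers

def render(edge, sharpness=200.0):
    return jax.nn.sigmoid(sharpness * (edge - xs))

def loss(edge, target):
    return jnp.mean((render(edge) - target) ** 2)

target = render(0.7)                      # "photo" taken with the edge at 0.7
g = jax.grad(loss)(0.3, target)           # nonzero, so gradient descent can
                                          # slide the edge toward its target
```

The same softening idea, applied to visibility and occlusion in full 3D renderers, is what supplies the derivatives for end-to-end optimization.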

Previous work

This system combines the pseudorandom encoding scheme of DiffuserCam with the Miniscope, replacing the tube lens with an engineered diffuser prototyped using multiphoton polymerization. The result is an inexpensive compressive imaging system that captures fluorescent volumes with 3 micron lateral and 10 micron axial resolution at video rates with no moving parts. This compact system is well suited to applications where inexpensive, compact volumetric fluorescence imaging is needed, from parallel imaging in incubators where space is at a premium to head-mounting for volumetric in-vivo neuroscience. The videos above show the snapshot 3D reconstruction capability of the system: the first is a time series of a fluorescence-stained tardigrade (water bear) acquired at 30 volumes/second; below that is a snapshot reconstruction of neurons expressing GFP in cleared mouse brain tissue. Our system achieves a far greater axial imaging range than conventional light field approaches. The paper is now out in Light: Science & Applications. [Project page]

This project demonstrates the innate compressive video capability of DiffuserCam. Because image sensor chips have finite ADC bandwidth, recording video typically requires a trade-off between frame rate and pixel count. Compressed sensing techniques can circumvent this trade-off by assuming that the scene is compressible. Here, we propose using multiplexing optics to spatially compress the scene, enabling information about the whole scene to be sampled from a single row of sensor pixels, which can be read off quickly via a rolling-shutter CMOS sensor. Conveniently, such multiplexing can be achieved with a simple lensless, diffuser-based imaging system. Using sparse recovery methods, we recover 140 video frames at over 4,500 frames per second, all from a single image captured with a rolling-shutter sensor. Our proof-of-concept system pairs easily fabricated diffusers with an off-the-shelf sensor. The resulting prototype compressively encodes high-frame-rate video into a single rolling-shutter exposure and exceeds the sampling-limited performance of an equivalent global-shutter system for sufficiently sparse objects. Best paper at ICCP 2019. [IEEE] [arXiv]
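The forward model behind this is compact enough to sketch. In the hypothetical toy below (a dense pseudorandom matrix stands in for the diffuser's multiplexing, and the shutter windows are simplified), each sensor row integrates the multiplexed scene only while its shifted rolling-shutter window is open; the video is then recovered from the single resulting image by ISTA with an l1 sparsity prior:

```python
import jax
import jax.numpy as jnp

n, T = 32, 8   # sensor is n x n; video has T frames
key = jax.random.PRNGKey(1)

# Stand-in for the diffuser: a pseudorandom mixing of scene rows, so every
# sensor row carries information about the entire scene.
D = jax.random.normal(key, (n, n)) / jnp.sqrt(n)

def measure(video):
    # video: (T, n, n). Multiplex each frame, then let each sensor row
    # integrate only the frames whose rolling-shutter window covers it.
    mixed = jnp.einsum('ij,tjk->tik', D, video)
    rows = jnp.arange(n)[None, :]
    t = jnp.arange(T)[:, None]
    mask = ((rows >= t * n // T) & (rows < (t + 2) * n // T)).astype(video.dtype)
    return jnp.einsum('tr,trk->rk', mask, mixed)   # one 2D image out

def ista(y, steps=300, lam=1e-3, lr=0.1):
    # Sparse recovery: minimize ||measure(x) - y||^2 / 2 + lam * ||x||_1.
    grad = jax.jit(jax.grad(lambda v: 0.5 * jnp.sum((measure(v) - y) ** 2)))
    x = jnp.zeros((T, n, n))
    for _ in range(steps):
        z = x - lr * grad(x)
        x = jnp.sign(z) * jnp.maximum(jnp.abs(z) - lr * lam, 0.0)  # soft threshold
    return x

truth = (jax.random.uniform(key, (T, n, n)) > 0.97).astype(jnp.float32)  # sparse video
recovered = ista(measure(truth))
```

The toy recovers T x n x n unknowns from an n x n image only because the scene is sparse, the same assumption the prototype relies on.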

We demonstrate a compact, easy-to-build computational camera for single-shot three-dimensional (3D) imaging. Our lensless system consists solely of a diffuser placed in front of an image sensor. Every point within the volumetric field-of-view projects a unique pseudorandom pattern of caustics on the sensor. By using a physical approximation and a simple calibration scheme, we solve the large-scale inverse problem in a computationally efficient way. The caustic patterns enable compressed sensing, which exploits sparsity in the sample to solve for more 3D voxels than there are pixels on the 2D sensor. Our 3D reconstruction grid is chosen to match the experimentally measured two-point optical resolution, resulting in 100 million voxels reconstructed from a single 1.3-megapixel image. However, the effective resolution varies significantly with scene content; because this effect is common to a wide range of computational cameras, we provide a new theory for analyzing resolution in such systems. Best demo at ICCP 2017. [OSA] [arXiv] [DIY tutorial]
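Under the shift-invariance approximation described above, the forward model is one 2D convolution per depth plane, summed on the sensor, which is why calibration reduces to recording a single caustic PSF per depth. A sketch in JAX (sensor cropping omitted for brevity; the array names are ours):

```python
import jax
import jax.numpy as jnp

def forward(volume, psfs):
    # volume: (Z, H, W) voxel grid; psfs: (Z, H, W), one calibrated caustic
    # PSF per depth. Each plane convolves with its PSF (via FFT) and the
    # contributions add up on the 2D sensor.
    V = jnp.fft.fft2(volume)
    P = jnp.fft.fft2(jnp.fft.ifftshift(psfs, axes=(-2, -1)))
    return jnp.real(jnp.fft.ifft2(V * P)).sum(axis=0)

def adjoint(meas, psfs):
    # The adjoint needed by iterative solvers falls out of autodiff, since
    # the forward model is linear in the volume.
    _, vjp = jax.vjp(lambda v: forward(v, psfs), jnp.zeros_like(psfs))
    return vjp(meas)[0]
```

Plugging this operator into a sparsity-constrained solver (such as the ISTA loop sketched earlier) is what lets the reconstruction recover more voxels than the sensor has pixels, provided the sample is sparse.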


We capture 4D light field data in a single 2D sensor image by encoding spatio-angular information into a speckle field (caustic pattern) through a phase diffuser. Using wave-optics theory and a coherent phase retrieval method, we calibrate the system by measuring the diffuser surface height from through-focus images. Wave-optics theory further informs the design of system geometry such that a purely additive ray-optics model is valid. Light field reconstruction is done using sparsity-constrained iterative inverse methods. We demonstrate a prototype system and present empirical results of 4D light field reconstruction and computational refocusing from a single diffuser-encoded 2D image. Best paper at ICCP 2016. [IEEE]
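Once the 4D light field is in hand, computational refocusing is the standard shift-and-add integral: shear each angular view in proportion to its angular coordinate, then average. A minimal sketch (integer pixel shifts for brevity; practical pipelines interpolate sub-pixel shifts):

```python
import jax.numpy as jnp

def refocus(lf, slope):
    # lf: (U, V, H, W) light field; slope: disparity per unit view, in pixels.
    # Varying `slope` sweeps the synthetic focal plane through the scene.
    U, V, H, W = lf.shape
    out = jnp.zeros((H, W))
    for i in range(U):
        for j in range(V):
            dy = round(slope * (i - U // 2))   # shear proportional to the
            dx = round(slope * (j - V // 2))   # view's angular coordinate
            out = out + jnp.roll(lf[i, j], (dy, dx), axis=(0, 1))
    return out / (U * V)
```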