2012-06-23

The AWARE2 Gigapixel Camera

The Gigapixel Race Begins? | Compound Eye, Scientific American Blog Network

So your showoff neighbor brings home a new 36 megapixel Nikon SLR, and your previously top-of-the-line 18 megapixel gadget starts to seem… inadequate. The insolence! The injustice! What can you buy to put that jerk in his place?
How about raising the stakes by an order of magnitude with a 960 megapixel supercamera?

The AWARE-2. (credit: Duke University Imaging and Spectroscopy Program)
Researchers at Duke University and the University of Arizona report in this week’s Nature that they’ve built a functional prototype of a gigapixel-scale camera. The Orwellian-sounding AWARE-2 uses an array of 98 sensors mounted behind a single aperture to capture an enormous image in one go.

Next Cameras Come Into View - WSJ.com


The new camera collects more than 30 times as much picture data as today's best consumer digital devices. While existing cameras can take photographs that have pixel counts in the tens of millions, the Duke device produces a still or video image with a billion pixels—five times as much detail as can be seen by a person with 20/20 vision.
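
A quick back-of-the-envelope check of those figures (the 36 MP reference point is the consumer SLR mentioned at the top of the post, and raw pixel count is only a rough proxy for "picture data"):

```python
# Rough sanity check of the pixel-count comparison above.
# Figures are approximate and taken from the excerpts.

aware2_pixels = 1_000_000_000    # ~1 gigapixel per AWARE-2 frame
consumer_pixels = 36_000_000     # ~36 MP high-end consumer SLR (2012)

print(f"~{aware2_pixels / consumer_pixels:.0f}x the pixel count")   # ~28x
```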

A pixel is one of the many tiny areas of illumination on a display screen from which an image is composed. The more pixels, the more detailed the image.
The Duke device, called Aware-2, is a long way from being a product. The current version needs lots of space to house and cool its electronic boards; it weighs 100 pounds and is about the size of two stacked microwave ovens. It also takes about 18 seconds to shoot a frame and record the data on a disk.
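
To get a feel for why recording one frame takes so long, here is a rough data-volume estimate; the bytes-per-pixel figure is an assumption of mine, since the excerpt doesn't state the sensor bit depth or the disk throughput:

```python
# Back-of-envelope: raw data volume behind the 18-second frame time.
# The gigapixel and 18 s figures come from the excerpt; 1.5 bytes/pixel
# (roughly 12-bit raw samples) is an illustrative assumption.

pixels_per_frame = 1_000_000_000
bytes_per_pixel = 1.5
seconds_per_frame = 18

frame_bytes = pixels_per_frame * bytes_per_pixel
print(f"~{frame_bytes / 1e9:.1f} GB per frame")                         # ~1.5 GB
print(f"~{frame_bytes / seconds_per_frame / 1e6:.0f} MB/s sustained")   # ~83 MB/s
```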



AWARE2 Multiscale Gigapixel Camera


This program is focused on building wide-field, video-rate, gigapixel cameras in small, low-cost form factors. Traditional monolithic lens designs must increase f/# and lens complexity, and reduce field of view, as image scale increases. In addition, traditional electronic architectures are not designed for highly parallel streaming and analysis of large-scale images. The AWARE wide-field-of-view project addresses these challenges using multiscale designs that combine a monocentric objective lens with arrays of secondary microcameras.
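
As a loose sketch of that multiscale idea (the class and field names below are illustrative, not drawn from the AWARE project's actual software): one shared monocentric objective, plus an array of identical microcameras, each carrying its own sensor and its own capture settings.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Microcamera:
    """One secondary imager behind the shared objective (illustrative only)."""
    sensor_id: int
    center_az_deg: float        # pointing direction within the overall field of view
    center_el_deg: float
    exposure_ms: float = 10.0   # each microcamera can expose independently...
    focus_position: float = 0.0 # ...and focus independently

@dataclass
class MultiscaleCamera:
    """A single monocentric objective shared by an array of microcameras."""
    objective_focal_mm: float
    microcameras: List[Microcamera] = field(default_factory=list)

    def capture_all(self):
        """Trigger every microcamera; in the real system the sub-images are
        streamed in parallel and stitched into one composite frame."""
        return {m.sensor_id: (m.exposure_ms, m.focus_position)
                for m in self.microcameras}
```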

The optical design explored here uses a multiscale approach in conjunction with a monocentric objective lens [2] to achieve near-diffraction-limited performance throughout the field. A monocentric objective enables the use of identical secondary systems (referred to as microcameras), greatly simplifying design and manufacturing. Following the multiscale lens design methodology, the field of view (FOV) is increased by arraying microcameras along the focal surface of the objective. In practice, the FOV is limited by the physical housing. As a result, cost and volume scale much more nearly linearly with FOV. Additionally, each microcamera operates independently, offering much more flexibility in image capture, exposure, and focus parameters. The basic architecture produces a 1.0-gigapixel image from 98 micro-optics covering a 120-by-40-degree FOV.
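
Dividing those quoted totals evenly across the 98 microcameras gives a rough per-camera budget (the even split, which ignores the overlap between neighboring sub-images needed for stitching, is my assumption):

```python
# Per-microcamera budget implied by the figures above (even split assumed;
# overlap between adjacent sub-images for stitching is ignored).

total_pixels = 1_000_000_000    # 1.0 gigapixel composite
n_microcameras = 98
fov_deg = (120, 40)             # overall field of view, degrees

print(f"~{total_pixels / n_microcameras / 1e6:.0f} MP per microcamera")                # ~10 MP
print(f"~{fov_deg[0] * fov_deg[1] / n_microcameras:.0f} sq. degrees per microcamera")  # ~49
```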