
Google developed its own mobile chip to help smartphones take better photos

Google originally disabled the Pixel Visual Core for everything except the stock camera app, but the company is now turning it on for outside applications as well. Developers can check out the open source documents here, and users will likely notice better color and dynamic range in photos from other apps going forward. The change is part of this month's Android update and will take effect once you download it.

Back in the film photography days, different films produced distinct "looks": say, light and airy, or rich and contrasty. An experienced photographer could glance at a shot and guess what kind of film it was made on from cues like color, contrast, and grain. We don't think about this much in the digital age; instead, we tend to treat digital files as neutral attempts to recreate what our eyes see. In reality, though, smartphone cameras do an intense amount of processing in the background, and engineers are responsible for guiding that technology to uphold an aesthetic. The new Google Pixel 2 uses unique algorithms and a dedicated image processor to give its photos a signature style.

The Pixel 2 camera was developed by a team of engineers who are also photographers, and they made subjective choices about how the smartphone's photos should appear. The emphasis is on vibrant colors and high sharpness across the frame. “I could absolutely identify a Pixel 2 image just by looking at it,” says Isaac Reynolds, an imaging project manager on Google’s Pixel 2 development team. “I can usually look in the shadows and tell it came from our camera.”

On paper, the camera hardware in the Pixel 2 looks almost identical to what you'd find in the original, with a lens offering the same coverage and a familiar resolution of 12 megapixels. But smartphone photography is increasingly dependent on algorithms and the chipsets that implement them, so that's where Google has focused a huge chunk of its efforts. In fact, Google baked a dedicated system-on-a-chip called the Pixel Visual Core into the Pixel 2 to handle the heavy lifting required for imaging and machine-learning processes.

For users, the biggest addition to the Pixel 2’s photography experience is its new high-dynamic-range tech, which is active on “99.9 percent” of the shots you’ll take, according to Reynolds. And while high-dynamic-range photos aren’t new for smartphone cameras, the Pixel 2’s version, called HDR+, works in an unusual way.

Every time you press the shutter on the Pixel 2, the camera takes up to 10 photos. If you’re familiar with typical HDR, you’d expect each photo to have a different exposure in order to preserve detail in both the highlights and the shadows. HDR+, however, captures every frame at the same exposure, allowing only for naturally occurring variations like noise. It splits the frames into a grid of tiles, then compares and combines them back into a single photo. Individually, the images look dark, which keeps the highlights from blowing out; the tones in the shadows are then amplified to bring out detail, and a machine-learning algorithm recognizes and eliminates the digital noise that typically appears when you brighten dark areas.
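
To make that process concrete, here is a deliberately simplified sketch of the same-exposure merge idea. It illustrates the general technique, not Google's actual HDR+ pipeline: the frame count, tile size, and tone curve below are assumptions, and the real system also aligns each tile to cope with motion before merging and uses far more sophisticated denoising.

```python
# Simplified, hypothetical illustration of same-exposure burst merging.
# Not Google's HDR+ implementation; parameters are illustrative assumptions.
import numpy as np

def merge_burst(frames, tile=16):
    """Average a burst of same-exposure frames, tile by tile.

    Averaging N tiles cuts random sensor noise by roughly sqrt(N), which is
    what makes it safe to brighten the shadows afterwards. (A real pipeline
    would also align each tile against a reference frame; that step is
    omitted here.)
    """
    ref = frames[0].astype(np.float32)
    merged = np.zeros_like(ref)
    h, w = ref.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            stack = np.stack([f[y:y + tile, x:x + tile].astype(np.float32)
                              for f in frames])
            merged[y:y + tile, x:x + tile] = stack.mean(axis=0)
    return merged

def lift_shadows(img, gamma=0.6):
    """Brighten shadows with a simple gamma curve while leaving the
    highlights (protected by the deliberately dark exposure) mostly alone."""
    norm = np.clip(img / 255.0, 0.0, 1.0)
    return (255.0 * norm ** gamma).astype(np.uint8)

# Usage: simulate a 10-frame burst of the same underexposed scene with noise.
rng = np.random.default_rng(0)
scene = rng.integers(0, 80, size=(64, 64)).astype(np.float32)  # dark scene
burst = [np.clip(scene + rng.normal(0, 10, scene.shape), 0, 255).astype(np.uint8)
         for _ in range(10)]
final = lift_shadows(merge_burst(burst))
```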