New editing tool makes selfies look much more flattering

Selfies have become such a popular photography style that even your grandmother has probably been in one at some point. Around the world, millions of people are holding their cameras at arm’s length and snapping self-portraits. Unfortunately, because the camera is so close, selfies tend to make subjects’ noses look larger, their ears look smaller, and their foreheads appear more sloped.

So, how can selfies be modified to be more flattering?

A team of researchers from Princeton University has invented a tool that corrects these distortions. The new method transforms a face so that it looks as though it was shot from farther away.
The editing tool can also alter a subject’s apparent pose, making it seem that the camera was placed higher, lower, or at an angle. Images adjusted in this manner can, when superimposed, be used to generate 3D head shots.
Eventually, the researchers say, the technology could even be used to make “live” photos that seem to move (think of the pictures at Harry Potter’s Hogwarts School).

“Although it is the age of the selfie, many people are unaware of how much these self-portraits do not really look like the person being photographed because the camera is way too close,” said Ohad Fried, lead developer of the new method and a Ph.D. candidate in the Department of Computer Science at Princeton University. “Now that people can edit so many aspects of a photo right on their phones, we wanted to provide a quick way to edit faces that maintains realism.”

The Princeton tool: Original vs. modified image. (Image Credit: Princeton University)

According to the team, it is the first method to address selfie distortions. The Princeton researchers have developed an online demonstration in which users can slide controls to make adjustments to images.

“As humans, we have evolved to be very sensitive to subtle cues in other people’s faces, so any artifacts or glitches in synthesized imagery tend to really jump out,” said Adam Finkelstein, senior author of the paper and a professor of computer science. “With this new method, we therefore had to make sure the photo modifications looked extremely realistic, and we were frankly surprised at the fidelity of the results we were able to obtain starting from just a single photo.”

To develop the method, the researchers used a model for generating digital 3D human heads drawn from FaceWarehouse, a database of 150 people photographed in 20 different poses, compiled by researchers at Zhejiang University. They then employed a program from researchers at Carnegie Mellon University that, when presented with a selfie, identifies nearly six dozen reference points across the face, such as the corners of the eyes, the top of the head, and the chin.
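
To give a concrete sense of the landmark-detection step, here is a minimal sketch using the open-source dlib library and its 68-point predictor. This is only a stand-in for the Carnegie Mellon program the team actually used, and the model file and image name are placeholders.

```python
# Hedged sketch: dlib's 68-point landmark detector stands in for the
# Carnegie Mellon tool described in the article. The .dat model file must
# be downloaded separately; "selfie.jpg" is a placeholder image.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("selfie.jpg")
for face in detector(img, 1):  # upsample once to catch smaller faces
    shape = predictor(img, face)
    # Collect the 68 (x, y) reference points: eye corners, chin line, brows...
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(f"found {len(points)} landmarks, first few: {points[:3]}")
```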

The new method then adjusts the 3D head model so that it corresponds to the points detected on the face; the eyes in the 3D model, for instance, line up with the subject’s eyes in the selfie. The result is an underlying 3D model of the 2D selfie image.
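
To illustrate the fitting idea, the sketch below uses OpenCV’s solvePnP to recover a camera pose that projects a few generic 3D head landmarks onto their detected 2D positions. The 3D coordinates and pixel locations are invented example values, and this is not the Princeton team’s actual procedure, which fits the full FaceWarehouse head model rather than six points.

```python
# Hedged illustration of model-to-landmark alignment, not the paper's
# procedure: solvePnP finds the rotation and translation that make a few
# generic 3D head points (made-up values) project onto their 2D landmarks.
import numpy as np
import cv2

# Approximate 3D landmark positions on a generic head (arbitrary units).
model_points = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

# Matching 2D landmarks detected in the selfie (example pixel coordinates).
image_points = np.array([
    (359, 391), (399, 561), (337, 297),
    (513, 301), (345, 465), (453, 469),
], dtype=np.float64)

# A simple pinhole camera: focal length roughly equal to the image width.
w, h = 640, 480
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera_matrix, None)
print("rotation vector:", rvec.ravel())
print("translation vector:", tvec.ravel())
```

Once such a pose (and, in the real system, the head shape) is recovered, moving the virtual camera farther away and re-projecting the model is what produces the corrected view.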

From that point, modifying the 3D model takes only seconds.

“I believe the reason the synthetic image looks so good is that it has exactly the same pixel colors as in the original photo—it’s just that they have been moved around a little bit to provide the illusion that the camera had been in a different location,” said Finkelstein.
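
Finkelstein’s point, that the output reuses the input’s pixel colors at shifted positions, can be demonstrated with OpenCV’s remap function. The displacement field below is a made-up radial warp for illustration only, not the perspective correction the paper computes from its 3D model.

```python
# Hedged sketch of "moving pixels around": cv2.remap resamples the original
# photo's colors at displaced coordinates, so every output color comes from
# the input image. The radial displacement here is invented for illustration.
import numpy as np
import cv2

img = cv2.imread("selfie.jpg")  # placeholder filename
h, w = img.shape[:2]

# map_x / map_y tell remap where each output pixel should sample from.
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

# Toy warp: sample slightly outward from the center, which gently shrinks
# near-center features (crudely mimicking a nose-reducing correction).
cx, cy = w / 2.0, h / 2.0
strength = 0.05
map_x = xs + strength * (xs - cx)
map_y = ys + strength * (ys - cy)

warped = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("warped.jpg", warped)
```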

Before considering commercial development or release of the new tool, the team wants to focus on honing its photo-editing capabilities.

One remaining issue is hair. When warped in the same manner as other features of the head, hair can look distorted because of its varied texture, styling, and color.

Another challenge is generating a missing feature, such as a left or right ear, that is not visible in the original image because of the subject’s pose.

Visit the project’s website for more information.
