Photo filters are blunt instruments. The Hefes and Amaros and Kelvins of the world—the algorithms that transform our snapshots into light-washed little works of EmoArt—are extremely unselective about the modifications they make to our images. "Style transfer," the translation of plain images according to those algorithms, applies its transformation to the entire image at once. Which works fine for the typical Instagram shot (of, say, a blackberry gelato, a silkscreened sunset, a babbling brook), but much less fine for the most intimate images we take: portraits. Our faces, it turns out, are extremely hard to filter—even when you have the app that promises to make you look flawless.
And that's because our minds are extremely attuned to the nuances of other people's appearances. "Our eyes are so sensitive to human faces," YiChang Shih, an MIT graduate student in electrical engineering and computer science, tells PhysOrg. "We're just intolerant to any minor errors." Which means that the filters we use to stylize our pictures may be great for images of nature and baked goods... but they're not great for stylizing each other. Or ourselves.
Shih and his colleagues, however, say they've solved that problem. And they've done so through a technique that localizes algorithmic filters—applying them, like so many topical headache medications, directly to the face. So: you know how portrait photographers like Diane Arbus and Richard Avedon had signature styles—the results of the artists' intentional adjustments of lens and light and shadow? Shih and his colleagues are claiming that they can replicate those styles, essentially, via algorithm.
And achieving that would be more difficult than it might seem to the average Instagrammer. The typical digital photo filter works through what are known as "global parameters"—things like exposure, color shift, and global contrast, all of them working in tandem on the pixels of an image. "We started with those filters," Shih says, "but just found that they didn't work well with human faces."
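To make the difference concrete, here is a minimal sketch (in Python, with NumPy) of what a global filter does: every pixel gets the same exposure boost, color shift, and contrast stretch, whether it lands on a sunset or a cheekbone. The parameter values are illustrative, not drawn from any actual Instagram filter.

```python
import numpy as np

def global_filter(image, exposure=1.1, color_shift=(1.05, 1.0, 0.95), contrast=1.2):
    """Apply a 'global parameter' filter to a float RGB image in [0, 1], shape (H, W, 3)."""
    out = image * exposure                  # uniform exposure boost, every pixel alike
    out = out * np.asarray(color_shift)     # warm or cool the entire frame
    out = (out - 0.5) * contrast + 0.5      # stretch contrast around mid-gray
    return np.clip(out, 0.0, 1.0)
```

Nothing in that function knows where the face is, which is exactly the problem Shih's team set out to fix.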
So he and his team—a group of scientists from MIT, Adobe Systems, and the University of Virginia—adopted a multi-step approach that uses the "local statistics" of an image, allowing for a granular approach to face filtration. Using off-the-shelf facial recognition software, the team first identified portraits that resembled—in facial orientation, style, and the like—the image they wanted to modify. "We then find a dense correspondence—like eyes to eyes, beard to beard, skin to skin," Shih says. And, from there, they do what they call a "local transfer"—a graft of the corresponding features from the model image onto the original.
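For illustration, here is a runnable, heavily simplified sketch of that last step, assuming the dense-correspondence step has already been done (that is, the example portrait has been warped so its features line up pixel for pixel with the input). The function name and the windowed mean/contrast remapping are my own stand-ins; the article doesn't spell out the team's actual local-statistics transfer.

```python
import numpy as np

def local_transfer(portrait, example, window=16):
    """portrait, example: aligned grayscale float arrays in [0, 1], same shape.

    For each small neighborhood, remap the portrait's local mean and contrast
    to match the corresponding neighborhood of the aligned example portrait.
    """
    out = np.empty_like(portrait)
    h, w = portrait.shape
    for y in range(0, h, window):
        for x in range(0, w, window):
            p = portrait[y:y + window, x:x + window]
            e = example[y:y + window, x:x + window]
            # Match this neighborhood's contrast (std) and brightness (mean)
            # to the example's, rather than applying one global adjustment.
            scale = e.std() / max(p.std(), 1e-6)
            out[y:y + window, x:x + window] = (p - p.mean()) * scale + e.mean()
    return np.clip(out, 0.0, 1.0)
```

Hard, non-overlapping windows like these would leave visible seams; any real implementation would blend across neighborhoods or work at multiple scales, which is part of why this is harder than a one-line filter.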
Shih and his team will present their work at the graphics conference Siggraph in August. And, from there, they're exploring commercial applications of their findings. Which means that, at some point not too far in the future, you could apply the "Arbus" filter to your selfie. You could be-Avedon yourself, using your phone.
Via MIT