The photo editing app FaceApp has gone viral (again) thanks to a feature that lets users “age themselves”: the software transforms a photo to show what that person might look like decades into the future. Like many viral sensations, FaceApp took off after influencers and celebrities posted their aged photos on social media with the hashtag #AgeChallenge.
Why it Matters:
FaceApp, developed by a Russian team, first went viral two years ago with face-altering technology that showed users what they would look like as a different gender. It later came under scrutiny for a filter that changed a person’s apparent ethnicity, drawing accusations of digital “blackface.” With the current surge in popularity, other concerns are coming to the foreground, chief among them security.
The security questions center on whether the app uploads a user’s camera roll in the background, and on how it lets users select photos without explicitly authorizing photo access. Security researchers have found no evidence of background uploading, and selecting a photo without an access prompt is behavior that Apple itself permits.
Through an API introduced in iOS 11, a user can choose a single photo to use in an app without granting that app full access to the photo library or camera roll. The design goal here is clear: intent. By allowing access to one specific photo at a time, rather than the entire library, user intent is front and center. However, it still calls into question an app’s ability to access any photo when the Photo Access permission is explicitly set to Never. As TechCrunch points out, “Never is not a default, it is an explicit choice and that permanent user intent overrules the one-off user intent of the new photo picker.”
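For context, a minimal sketch of how this works on the app side: since iOS 11, `UIImagePickerController` runs out of process, so an app can present it and receive the one photo the user picks without any photo-library authorization prompt. The class and method names below are hypothetical; only the `UIImagePickerController` API itself is real.

```swift
import UIKit

// Hypothetical view controller illustrating the iOS 11 behavior described
// above: the picker runs in a separate process, so presenting it requires
// no photo-library permission. The app never sees the library -- it only
// receives the single image the user explicitly chooses.
class ProfilePhotoViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func pickPhoto() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary   // no PHPhotoLibrary authorization needed
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        // Only the one photo the user tapped is delivered to the app.
        let image = info[.originalImage] as? UIImage
        picker.dismiss(animated: true)
        _ = image // ... use the selected image ...
    }
}
```

This is the mechanism behind the tension the article describes: the system treats each tap in the picker as one-off consent, even when the app-wide Photo Access setting says Never.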
The Bottom Line:
Photo access is a major security concern, and the controversy around popular apps such as FaceApp raises valid questions that users should be asking.