For years, tech-savvy people have known that photos shot on your phone contain lots of information you may not want revealed. The specific model of phone you use and the precise time and location where the photo was shot are all saved in the photo’s metadata. This information can be viewed in pretty much any image viewing app and can be used to put you at a specific time and place, which, depending on your work, relationships, or general desire for privacy, you may not want to share with whoever might be looking.
Stripping out the metadata in your photos is not too difficult. Here is a handy guide, but a simple trick is just to take a screenshot of your photo before posting it. The screenshot will contain metadata only about the time and location of the screenshot, not the time the photo was originally taken.
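If you are comfortable with a little scripting, you can strip metadata yourself by re-saving only the pixel data, which leaves the EXIF block behind. Below is a minimal sketch assuming the Pillow imaging library is installed (`pip install Pillow`); the function name and file paths are illustrative, not a standard tool.

```python
# Minimal metadata-stripping sketch, assuming the Pillow library.
# Re-saving only the pixel data into a fresh image drops EXIF entirely.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy an image's pixels into a new file that carries no metadata."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)  # a brand-new image has no EXIF
        clean.putdata(list(img.getdata()))     # copy raw pixel values only
        clean.save(dst_path)
```

Note that this drops everything, including color profiles, so colors can shift slightly; dedicated tools such as exiftool give finer control over which tags to remove.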
But metadata is not all you should be thinking about. Tools and techniques that were once available only to intelligence agencies to collect “open source intelligence” (known as OSINT in national security parlance) are now available to amateur sleuths. These techniques can be used to reveal personal identifying information in your photos, even if you have taken care to lock down your metadata.
Let’s go through some of the things you should be thinking about before you share that photo.
Who is in your photo?
Let’s start with the main thing you probably are using your camera for: photos of your family and friends. Facial recognition technology has become so ubiquitous, it’s not hard to imagine that somebody in your photo may be easily identifiable. So how easy is it for someone to identify the people in the photos that you post online?
Law enforcement can definitely do this sort of identification. Federal, state and local authorities have databases, populated by photographs from police cameras, driver’s licenses, passports and mugshots, for use in crime prevention and investigations (although a recent seven-year experiment in San Diego using facial recognition as a policing tool just ended with very little evidence that it helped solve any crimes).
And law enforcement’s access to photos is growing. A startup called Clearview AI, first reported by The New York Times, claims to have obtained more than three billion faces and their respective identities from public profiles on YouTube, Facebook, and other large platforms. Clearview’s app—which some law enforcement sources have claimed is more powerful than existing law enforcement facial recognition tools—has been in use by more than 2,200 law enforcement agencies. But concerns about this tool being used outside of law enforcement have grown with recent revelations showing that the company has been allowing others to try its technology, including big retail chains, schools, casinos, and even some individual investors and clients.
For most civilians, however, facial recognition is still difficult to access. If you post a photo to Facebook, its own enormous custom facial recognition database can identify other Facebook users, and in some cases it will prompt you to tag them. Google and Apple can also identify faces of your friends and family (that you have labeled) in your photo library. These labels are private, and both Google and Apple say they do not attempt to match these faces against real identities.
Russian search engine Yandex, which appears to be using a different, more powerful face-matching technology, is one of the only sites that lets you actually find similar-looking faces from an uploaded photo.
Most law enforcement facial recognition tools use a collection of measurements of facial features and their relationships to each other, which makes a unique “face print” that is matched against other known faces in a database. Yandex has not acknowledged using this kind of technology, but its search clearly works differently than Google Reverse Image Search and can find similar-looking faces. I tested it by searching Yandex for an image of my face against an unusual background that had never been posted online before, and it matched me with a staff photo from a previous job, along with a lot of other people who looked a lot like me. The same test on Google found images of faces that didn’t look like me at all, but the photos contained colors and textures similar to those in my picture.
This is likely to work better if you are searching for someone who is from Russia or a former Soviet Republic, as the results appear to be disproportionately from people in those areas. We reached out to Yandex for more detail on how its image search works, but the company declined to comment.
Google said it does not use the same powerful face-matching technology in search results as Yandex. Google’s Reverse Image Search will help you only if you are looking to match a photo that is already posted somewhere online. “Neither related search term suggestions nor visually similar image results for a reverse image search rely on biometrics/facial recognition technology,” a Google spokesperson said. (The company has posted more detail here about the use of artificial intelligence technologies in its products.)
The bottom line is that it is currently pretty hard for people outside of law enforcement to identify non-Russian people in your photos with facial recognition, but that could change as tools like Clearview AI become more widely available.
Where was the photo taken?
If you have granted your camera app permission to access your location, your photo metadata contains the latitude and longitude of more or less exactly where the photo was taken, including altitude and usually which direction the phone was pointed in. But even if you have taken active steps to limit your location from being revealed by disabling these permissions, location can still be determined using new tools and clever investigative techniques. This is an area of great interest to the U.S. intelligence community, evidenced by such research efforts as IARPA’s Finder Program.
Investigative journalists at Bellingcat employ these techniques by scouring public social media photos to help determine the precise locations of missile launchers in Ukraine, terrorist executions in Libya, and bombings in Syria. By identifying buildings, trees, bridges, utility poles and antennas, the Bellingcat investigators have helped advance the use of forensic photo and video analysis by sharing their techniques and offering training to journalists and researchers.
License plates, store and street signs, billboards, even T-shirts in your photo may give a clue about language and could help narrow down your possible location. Unusual architectural features, such as church spires, bridges or monuments, can be more easily reverse-image searched. Even the reflections in your photos (and your eyeballs) can contain information that can be used for geolocation. Recently in Japan, a young pop star was stalked and assaulted by a man who identified where she lived from a building reflected in her sunglasses in a photo she had posted.
But human sleuths have their limitations, and computers are getting much better at identifying locations automatically. If you have uploaded any non-GPS-tagged images to Google Photos, you may have noticed that they can still show location information. Google is using its computer vision algorithms to determine the likely location for your non-GPS shots based on how similar your pictures are to other known examples in Google’s data, along with other pictures and time stamps in your library.
For example, the photo above was taken in Boston, and based on the unique outline of the buildings against the sky, Google Photos was able to place a specific pin on a map locating where the photo was taken, even though there was no geolocation data in the photo.
It’s not just Google that can identify where a photo was taken. PeakVisor offers a glimpse into where this automated location detection may be headed. It allows users to upload a photo and match it with known landscape features such as mountains and hills.
Between human sleuths hunting for location clues and automated detection by computers, locations can be determined in a number of ways. You should assume your location could be identified if your photos contain substantial detail of buildings or landscape.
When was the photo taken?
Time and date are difficult, but not impossible, to determine. The “when” of a photo can sometimes be narrowed down by looking at weather, natural features, and light.
The weather conditions in your photo can give a bigger clue to when it was taken than you might think. WolframAlpha provides detailed historical meteorological data for any weather station (think zip code level), usually including cloud cover, temperature, precipitation, and other atmospheric data that could help confirm the time and date a photo was taken.
Of course, these techniques can just as easily be used to prove when a photo was not taken. Take the example of this photo of former Trump adviser George Papadopoulos. In early 2017, Papadopoulos started cooperating with federal investigators looking into Russian interference in the 2016 presidential election. His passport was seized, preventing him from traveling abroad. But on Oct. 25, 2017, just days before his guilty plea was announced, Papadopoulos posted a picture of himself in London, with the caption #business, implying that he was in the U.K. Journalists at Bellingcat compared the markings on a streetlight in the background with images on Google Street View and noticed that those markings were no longer on the streetlight in recent photos. They determined that the photo had been taken years before.
Zoom in before you post
Simply looking carefully at any image, taken in your home for instance (yes, zoom in all the way), may catch sensitive info in the background. Consider asking a friend to take a careful look at a photo before you post it. It’s easy to overlook things that we see every day, and a fresh pair of eyes can catch something you may miss.
This article was originally published on The Markup by Jon Keegan and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.