Facebook has announced a research project that aims to push the “frontier of first-person perception”, and in the process help you remember where you left your keys.
The Ego4D project provides a huge collection of first-person video and related data, plus a set of challenges for researchers to teach computers to understand the data and gather useful information from it.
In September, the social media giant launched a line of “smart glasses” called Ray-Ban Stories, which include a digital camera among other features. Much like the Google Glass project, which met with mixed reviews in 2013, the glasses have prompted complaints of privacy invasion.
The Ego4D project aims to develop software that will make smart glasses far more useful, but may in the process enable far greater breaches of privacy.
What is Ego4D?
Facebook describes the heart of the project as a massive-scale egocentric dataset and benchmark suite, collected at 74 locations across nine countries and comprising 3,025 hours of daily-life activity video.
The “Ego” in Ego4D means egocentric (that is, “first-person”) video, while “4D” stands for the three dimensions of space plus one more: time. In essence, Ego4D seeks to combine photos, video, geographical information and other data to build a model of the user’s world.
There are two components: a large dataset of first-person photos and videos, and a “benchmark suite” of five challenging tasks that can be used to compare different AI models or algorithms. The benchmarks involve analyzing first-person video to remember past events, create diary entries, understand interactions with objects and people, and forecast future events.
The dataset includes more than 3,000 hours of first-person video from 855 participants going about everyday tasks, captured with a variety of devices including GoPro cameras and augmented reality (AR) glasses. The videos cover activities at home, in the workplace, and hundreds of social settings.
What is in the dataset?
Although this is not the first first-person video dataset released to the research community, it is 20 times larger than any previously available. It includes video, audio, 3D mesh scans of the environment, eye-gaze data, stereo footage, and synchronized multi-camera views of the same events.
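To make that multimodal structure concrete, here is a minimal sketch in Python of how a researcher might filter such a collection by modality. Everything in it (the manifest layout, field names and values) is hypothetical and for illustration only; it is not Ego4D’s actual data format.

```python
# A hypothetical clip manifest: each record notes which modalities the clip
# carries. The schema and values are illustrative -- not Ego4D's real format.
clips = [
    {"id": "clip-001", "duration_hours": 0.5,
     "modalities": ["video", "audio", "eye_gaze", "3d_mesh"]},
    {"id": "clip-002", "duration_hours": 1.2,
     "modalities": ["video", "audio", "stereo"]},
    {"id": "clip-003", "duration_hours": 0.8,
     "modalities": ["video", "eye_gaze", "3d_mesh", "multi_camera"]},
]

# Keep only clips that carry both eye gaze and a 3D mesh scan, e.g. for
# research that needs gaze grounded in scene geometry.
wanted = {"eye_gaze", "3d_mesh"}
selected = [c for c in clips if wanted <= set(c["modalities"])]

total_hours = sum(c["duration_hours"] for c in selected)
print(f"{len(selected)} clips selected, {total_hours:.1f} hours in total")
```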
Most of the recorded footage is unscripted, or “in the wild”. The data is also quite diverse: it was collected at 74 locations across nine countries, and the camera wearers span a range of backgrounds, ages and genders.
What can we do with it?
Typically, computer vision models are trained and tested on images and videos annotated for a specific task. Facebook argues that current AI datasets and models represent a third-person or “spectator” view, resulting in limited visual perception. Understanding first-person video could help us design robots that engage better with their surroundings.
Furthermore, Facebook argues egocentric vision can potentially transform how we use virtual and augmented reality devices such as glasses and headsets. If we can develop AI models that understand the world from a first-person viewpoint, just like humans do, VR and AR devices may become as valuable as our smartphones.
Can AI make our lives better?
Facebook has also developed five benchmark challenges as part of the Ego4D project. The benchmarks all focus on first-person perception, with the aim of building AI assistants that genuinely understand video. They are described as follows (a toy sketch of the first benchmark appears after the list):
- Episodic memory (what happened when?): for example, figuring out from a first-person video where you left your keys
- Hand-object manipulation (what am I doing and how?): this aims to better understand and teach human actions, such as giving instructions on how to play the drums
- Audio-visual conversation (who said what and when?): this includes keeping track of and summarising conversations, meetings, or classes
- Social interactions (who is interacting with whom?): this is about identifying people and their actions, with a goal of doing things like helping you hear a person better if they’re talking to you
- Forecasting activities (what am I likely to do next?): this aims to anticipate your intentions and offer advice, like pointing out you’ve already added salt to a recipe if you look like you’re about to add some more.
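To give a flavour of what the episodic-memory benchmark asks of a model, here is a toy Python sketch of a “where did I last see X?” query over pre-extracted observations. All names and data here are hypothetical illustrations, not Ego4D’s API; a real system would have to extract these observations from raw video, which is the hard part the benchmark measures.

```python
from __future__ import annotations
from dataclasses import dataclass

# A toy "episodic memory": each observation records an object the camera
# saw, where, and when. All names are hypothetical -- not the Ego4D API.
@dataclass
class Observation:
    obj: str          # object label, e.g. "keys"
    place: str        # coarse location, e.g. "kitchen counter"
    timestamp: float  # seconds since the recording started

def last_seen(observations: list[Observation], obj: str) -> Observation | None:
    """Answer "where did I last see X?" by scanning observations newest-first."""
    for obs in sorted(observations, key=lambda o: o.timestamp, reverse=True):
        if obs.obj == obj:
            return obs
    return None

# Observations from a fictional day of first-person video. In practice a
# model would need to produce these from the footage itself.
memory = [
    Observation("keys", "hallway table", 120.0),
    Observation("mug", "desk", 3600.0),
    Observation("keys", "kitchen counter", 5400.0),
]

hit = last_seen(memory, "keys")
if hit:
    print(f"You last saw your keys on the {hit.place} at t={hit.timestamp:.0f}s")
```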
What about privacy?
Obviously, there are significant privacy concerns. If this technology is paired with smart glasses constantly recording and analyzing the environment, the result could be constant tracking and logging (via facial recognition) of people moving around in public.
Facebook says it will maintain high ethical and privacy standards for the data gathered for the project, including obtaining participants’ consent, conducting independent reviews, and de-identifying data where possible. It says the data was captured in a “controlled environment with informed consent”, and that in public spaces “faces and other PII [personally identifying information] are blurred”.
But despite these reassurances (and noting this is only a trial), there are concerns over the future of smart-glasses technology coupled with the power of a social media giant whose intentions have not always been aligned with those of its users.
The ImageNet dataset, a huge collection of tagged images, has helped computers learn to analyze and describe images over the past decade or more. Will Ego4D do the same for first-person video?
We may get an idea next year. Facebook has invited the research community to participate in the Ego4D competition in June 2022, and pit their algorithms against the benchmark challenges to see if we can find those keys at last.