Over the past few years, Apple vehemently denied any interest in building self-driving cars in order to keep its efforts secret; the company only confirmed that it was working on autonomous technology in June, and even then it shared little insight into its progress – until now.
A paper published last week by Yin Zhou and Oncel Tuzel, who are AI and machine learning researchers at the company, represents one of the first major breakthroughs we’ve seen from Apple’s self-driving project. And although it hasn’t been tested in the real world, it already seems like a notable development that could make rivals sit up and take notice.
The duo at Apple has devised what they're calling VoxelNet, an architecture for detecting 3D objects using the Light Detection and Ranging (LiDAR) sensing method. The researchers note that VoxelNet is better than state-of-the-art LiDAR-based systems at spotting not just cars, but also pedestrians and cyclists. They explained:
VoxelNet divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer. In this way, the point cloud is encoded as a descriptive volumetric representation, which is then connected to a region proposal network (RPN) to generate detections. Experiments on the KITTI car detection benchmark show that VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a large margin. Furthermore, our network learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists, based on only LiDAR.
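To make the first step of that pipeline concrete, here is a minimal sketch of what "dividing a point cloud into equally spaced 3D voxels" looks like, using NumPy. Everything here is illustrative, not Apple's code: the voxel size is an arbitrary choice, and the `encode_voxel` function is a crude hand-written stand-in for the learned VFE layer described in the paper.

```python
import numpy as np

# Toy LiDAR point cloud: 100 points with (x, y, z) coordinates in [0, 4).
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 4.0, size=(100, 3))

VOXEL_SIZE = 1.0  # edge length of each cubic voxel (hypothetical value)

def voxelize(points, voxel_size):
    """Group points into equally spaced 3D voxels.

    Returns a dict mapping integer voxel indices (i, j, k) to the
    array of points that fall inside that voxel.
    """
    indices = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for idx, pt in zip(map(tuple, indices), points):
        voxels.setdefault(idx, []).append(pt)
    return {k: np.stack(v) for k, v in voxels.items()}

def encode_voxel(voxel_points):
    """Crude stand-in for a VFE layer: summarize the points in one
    voxel as a fixed-length feature (mean position + point count).
    The real VFE layer is learned end-to-end; this is only a sketch."""
    return np.concatenate([voxel_points.mean(axis=0), [len(voxel_points)]])

voxels = voxelize(points, VOXEL_SIZE)
features = {idx: encode_voxel(pts) for idx, pts in voxels.items()}
```

The key idea is that each voxel ends up with a fixed-length feature vector regardless of how many raw points it contains, which lets the resulting volumetric grid be fed to a convolutional detection network.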
Hopefully, Apple will continue to share more of its work in the field over time. The company had been spotted testing LiDAR-equipped SUVs on roads in California, and even began trialing self-driving short-haul shuttles between its campuses earlier this year.
You can find Zhou and Tuzel’s full paper on this page (PDF).