Intel and Mobileye begin testing their autonomous fleet in Jerusalem

The first phase of the Intel and Mobileye 100-car autonomous vehicle (AV) fleet has begun operating in the challenging and aggressive traffic conditions of Jerusalem. The fleet is being driven on public roads to demonstrate the power of the Mobileye approach, to prove that the Responsibility-Sensitive Safety (RSS) model increases safety, and to feed key learnings back into Mobileye's products and customer projects. In the coming months, the fleet will expand to the U.S. and other regions.

During this initial phase, the fleet is powered only by cameras. In a 360-degree configuration, each vehicle uses 12 cameras, with eight cameras providing long-range surround view and four cameras utilized for parking. The goal in this phase is to prove that Mobileye can create a comprehensive end-to-end solution by processing only the camera data. The company characterizes an end-to-end AV solution as consisting of four stages: a surround-view sensing state capable of detecting road users, drivable paths and the semantic meaning of traffic signs/lights; the real-time creation of HD maps, together with the ability to localize the AV with centimeter-level accuracy; path planning (i.e., driving policy); and vehicle control. In videos accompanying the source announcement, the sensing state is depicted as a top-view rendering of the environment around the AV while in motion.
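
To make the shape of that pipeline concrete, here is a minimal sketch of how the four stages might compose into a single driving loop. Mobileye has not published its interfaces, so every class and function name below is hypothetical: a sketch under the assumptions of the paragraph above, not the company's actual implementation.

```python
from dataclasses import dataclass, field

# All names below are hypothetical stand-ins for the four stages described
# above; Mobileye's real interfaces are not public.

@dataclass
class SensingState:
    road_users: list = field(default_factory=list)       # vehicles, pedestrians, cyclists
    drivable_paths: list = field(default_factory=list)   # geometry the AV may drive on
    traffic_signals: list = field(default_factory=list)  # semantic sign/light states

def perceive(camera_frames: list) -> SensingState:
    """Stage 1: fuse the 12-camera surround view into one sensing state."""
    return SensingState()  # detection/segmentation would run here

def localize(state: SensingState, hd_map: dict) -> tuple:
    """Stage 2: match perceived landmarks to the HD map for a cm-level pose."""
    return (0.0, 0.0, 0.0)  # placeholder (x, y, heading)

def plan(state: SensingState, pose: tuple) -> list:
    """Stage 3: driving policy -- choose a trajectory along a drivable path."""
    return []  # placeholder list of waypoints

def control(trajectory: list) -> dict:
    """Stage 4: convert the planned trajectory into actuator commands."""
    return {"steering": 0.0, "throttle": 0.0}

def drive_step(camera_frames: list, hd_map: dict) -> dict:
    """One tick of the end-to-end loop: sense, localize, plan, actuate."""
    state = perceive(camera_frames)
    pose = localize(state, hd_map)
    return control(plan(state, pose))
```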

The camera-only phase is Mobileye's strategy for achieving what the company calls “true redundancy” of sensing. True redundancy refers to a sensing system consisting of multiple independently engineered sensing systems, each of which can support fully autonomous driving on its own. This is in contrast to fusing raw sensor data from multiple sources early in the process, which in practice results in a single sensing system. The company claims true redundancy provides two major advantages. First, the amount of driving data required to validate the perception system is massively lower: because independently engineered systems fail jointly only when both fail at once, their failure probabilities multiply, so each system needs validation data on the order of the square root of what a fused system would require (the square root of 1 billion hours vs. 1 billion hours). Second, if one of the independent systems fails, the vehicle can continue operating safely, whereas a vehicle with a low-level fused system must cease driving immediately.
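
To see where the square-root figure comes from, a few lines of arithmetic help. Purely for illustration, assume the perception system must demonstrate one failure per billion hours (the reciprocal of the 1 billion validation hours cited above):

```python
import math

# Illustrative target: one perception failure per billion hours, i.e. the
# reciprocal of the 1 billion validation hours cited above.
TARGET_FAILURE_RATE = 1e-9  # failures per hour

# A single (raw-fused) sensing system must demonstrate the target rate
# directly, which takes on the order of 1/rate hours of driving data.
fused_hours = 1 / TARGET_FAILURE_RATE

# Two independently engineered systems fail jointly only when both fail at
# once, so their per-hour failure rates multiply: p * p = 1e-9 means each
# system need only demonstrate p = sqrt(1e-9) on its own.
per_system_rate = math.sqrt(TARGET_FAILURE_RATE)
per_system_hours = 1 / per_system_rate

print(f"Fused system:            {fused_hours:,.0f} hours")       # 1,000,000,000
print(f"Each independent system: {per_system_hours:,.0f} hours")  # ~31,623
```

Roughly 31,623 hours per independent system instead of a billion for the fused one, which is the gap the company's validation claim points at.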

The radar/lidar layer will be added in the coming weeks as a second phase of development, after which synergies among the sensing modalities can be exploited to increase the “comfort” of driving.

Source: https://newsroom.intel.com/editorials/if-you-can-drive-jerusalem-you-can-drive-almost-anywhere/
