Tushar Chhabra

From Indpaedia
Revision as of 20:36, 7 December 2023 by Jyoti Sharma



This is a collection of articles archived for the excellence of their content.
Additional information may please be sent as messages to the Facebook
community, Indpaedia.com. All information used will be gratefully
acknowledged in your name.

Cron.ai

LIDAR: 2023

Akhil George, Nov 25, 2023: The Times of India


MAKING PUBLIC SPACES SMART WITH LIDAR

Tushar Chhabra’s Cron.ai has built a Lidar-based 3D-perception system that’s being used in smart city projects in Europe, and to detect intrusions on the Indo-Pak border



How do you make a public space smart? At the most basic level, you would expect camera sensors to continuously perceive their surroundings, since public spaces are dynamic, with people and vehicles sporadically entering and exiting the sensor range. Once a 3D visual map is created, you can do what you want with the data. At a traffic signal, it could be to monitor traffic violations, while in a mall, it could be to track how many people enter which stores.


There is an issue, though. Training a camera to recognise and tag what it is seeing, in real time, is challenging. In the traffic signal example, a bus with its lights on could blind the camera sensor, making it difficult to distinguish, in real time, whether there are smaller vehicles in the vicinity. There is a solution to this problem. Install a Lidar sensor at the signal instead of a smart camera, use the Lidar to send out hundreds of laser beams to constantly monitor its surroundings, and then use software and AI to get the system to constantly tag what it is seeing. The results from the two methods are starkly different (see the visuals above).
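The article does not describe Cron.ai's actual algorithms, but the core idea of turning a raw Lidar scan into tagged objects can be sketched in miniature. The snippet below is a hypothetical, simplified illustration: it groups nearby 3D points into clusters, where each cluster roughly corresponds to one physical object (a car, a pedestrian) in the sensor's field of view. Real systems use far more sophisticated deep-learning pipelines.

```python
import math

def cluster_points(points, radius=0.5):
    """Group 3D points into clusters of nearby points using a naive
    flood fill. Each resulting cluster roughly corresponds to one
    physical object in the Lidar's field of view."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= radius]
            for j in near:
                unvisited.discard(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append([points[i] for i in cluster])
    return clusters

# A toy "scan": two well-separated groups of points -> two objects
scan = [(0.0, 0.0, 0.0), (0.2, 0.1, 0.0), (0.3, 0.0, 0.1),
        (5.0, 5.0, 0.0), (5.2, 5.1, 0.0)]
print(len(cluster_points(scan)))  # 2
```

Because the clusters are computed purely from point geometry, glare or darkness that would blind a camera has no effect on the grouping, which is the advantage the article describes.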


Smoke, heavy dust, bright lights: nothing affects the perception accuracy of Lidar-based 3D-perception systems. Tushar Chhabra has been making this case for several years, but it has become much easier now that the company he co-founded, Cron.ai, has a product that can practically showcase the advantages of the approach. Cron is headquartered in London, but has an office in Delhi as well.


Tushar says that while Lidar hardware has evolved rapidly over the last few years, the accompanying software lagged far behind. He came to this realisation while living near the Indo-Pak border for six months. “Billions of dollars were being spent on hardware, but there was a need for software solutions.”


It took Tushar and his engineers four years, but in the end, they developed senseEDGE. Processing raw 3D data from Lidars, this IP-driven technology operates on the embedded edge, handling millions of data points. What sets senseEDGE apart is its deep learning approach, eliminating traditional bottlenecks and ensuring seamless integration into diverse applications. “Whether navigating complex traffic in autonomous vehicles or enhancing situational awareness in smart city infrastructure, senseEDGE excels. Its unique ability to process occluded objects adds an extra layer of sophistication,” he says.


Tushar says that as autonomous vehicles, smart cities, and industrial automation advance, tech like senseEDGE will be crucial to propel real-world innovations. “Autonomy can be made accessible to anyone by allowing machines and infrastructure to perceive the world around them in three dimensions,” he says.


The software can be deployed by anyone with a Lidar sensor. It can monitor individuals and vehicles while preserving privacy, since it collects no image data, licence plates, facial identifiers, or any other personal information.
Cron’s tech is deployed at the Detroit Smart Parking Lab, on the Indo-Pak border to detect intrusions, and in smart city projects in Europe. But Tushar stresses that the potential use cases are limited only by one’s imagination.


You could use it to count people and do crowd analytics, or optimise vehicle flow to reduce congestion, or gain insights into the efficiency of resource allocation for optimal asset use. You could seed Lidar sensors across a city and then let autonomous vehicles rely on the 3D data collected by the sensors to figure out their surroundings.


CHALLENGES


In building this novel 3D perception system, Tushar’s team had to deal with several big challenges. The first was the unavailability of off-the-shelf processing models that could be downloaded and stitched together. The second was the absence of training data. The third was finding engineers who could write algorithms from scratch, as very few were capable of reasoning about the world in 3D.


At one point, Tushar set out on the road himself, in a truck, to collect the data needed to train the system. The $9 million in funding they had was grossly insufficient to hire a labelling team, so they built self-supervised learning models instead.


Tushar says Cron needed the best machine-learning engineers it could get its hands on (the reason it set up its HQ in London), product experts with decades of experience to build trackers and other products, and embedded-systems specialists to optimise the software.


GLOSSARY OF IN-GROUP TERMS USED ON THIS PAGE

LiDAR

Light Detection and Ranging is a remote sensing technology that uses laser light to measure distances and create detailed, three-dimensional maps of the surrounding environment. The basic principle of LiDAR involves emitting laser pulses and measuring the time it takes for the light to return after bouncing off objects. This data is then used to generate accurate and high-resolution maps or models of the terrain, objects, or surfaces within the LiDAR’s line of sight. LiDAR technology is valuable in scenarios where high-precision mapping and three-dimensional data are required. Its applications continue to expand as the technology evolves and becomes more widely adopted across various industries.
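The round-trip principle described above reduces to a single formula: distance = (speed of light × round-trip time) / 2, with the division by two because the pulse travels out to the target and back. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in a vacuum, metres per second

def tof_distance_m(round_trip_seconds):
    """Distance to a target from a Lidar pulse's round-trip time.
    The light travels out and back, so divide by two."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~66.7 nanoseconds hit something ~10 m away
print(round(tof_distance_m(66.7e-9), 2))  # 10.0
```

The nanosecond timescales involved are why Lidar units need very fast, precise electronics: at 10 m range, a timing error of a single nanosecond shifts the measured distance by about 15 cm.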

Edge computing


Edge computing is a distributed computing paradigm that involves processing data closer to the source of data generation rather than relying solely on a centralized cloud server. In traditional cloud computing models, data is sent to a remote data center or cloud server for processing and analysis. In contrast, edge computing brings computation and data storage closer to the devices or “edge” of the network where the data is generated. By processing data closer to where it is produced, edge computing reduces the time it takes for data to travel back and forth between the source and a centralized cloud server. This is critical for applications that require real-time or near-real-time processing, such as autonomous vehicles, industrial automation, and augmented reality.
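The latency argument above can be made concrete with some back-of-the-envelope arithmetic. The numbers below (a 10 Hz scan rate, 30 ms of inference, a 120 ms round trip to a remote server) are illustrative assumptions, not figures from the article: a sensor producing frames every 100 ms simply cannot wait on a cloud round trip that alone exceeds the frame interval.

```python
def frame_budget_ok(processing_ms, network_rtt_ms, fps):
    """Whether a perception pipeline keeps up with the sensor:
    total per-frame latency must fit inside one frame interval."""
    frame_interval_ms = 1000.0 / fps
    return processing_ms + network_rtt_ms <= frame_interval_ms

FPS = 10  # an assumed typical Lidar scan rate -> 100 ms per frame

# Cloud: 30 ms inference + 120 ms round trip to a remote server
print(frame_budget_ok(30, 120, FPS))  # False: frames pile up
# Edge: same inference, negligible network hop on the device itself
print(frame_budget_ok(30, 0, FPS))    # True: fits the 100 ms budget
```

This is why senseEDGE running "on the embedded edge" matters for the traffic-signal and border-intrusion deployments described earlier: the decision is made where the data is produced.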
