When Rain Stops the Future: Why Robotaxis Can’t Scale Without Solving the Weather Problem

By early 2026, robotaxis are completing millions of rides monthly across cities from San Francisco to Shanghai—yet a single rainstorm can still paralyze entire fleets. While the industry races toward full autonomy, adverse weather has emerged as the critical bottleneck preventing widespread deployment. The challenge isn’t just about better sensors; it’s about fundamentally understanding what cameras and LiDAR cannot perceive: the physical grip between tire and road. Companies like Easyrain are pioneering a solution that bridges this gap through virtual sensing and active safety systems, offering a pathway to truly weather-resilient autonomous mobility.
The Current Landscape: Autonomy Hits a Wall
The robotaxi market has reached an inflection point. Waymo, the market leader, operates over 2,500 vehicles completing 450,000 paid rides weekly as of late 2025, with plans to scale to 1 million rides per week by the end of 2026. In China, Baidu’s Apollo Go delivered 3.1 million fully driverless rides in Q3 2025 alone, operating across more than 20 cities.
Yet beneath these impressive numbers lies a persistent vulnerability. Tesla’s limited Austin deployment faced immediate scrutiny when vehicles reportedly “gave up” during heavy downpours, requiring passenger extraction mid-ride. Even Waymo’s more sophisticated sensor suite exhibited “indecisive” behavior in rain, pulling over frequently or failing to locate pickup points due to sensor noise. NHTSA opened a formal review in June 2025 specifically targeting Tesla’s “Vision-Only” performance in low-visibility conditions.
The economic implications are stark. For a fleet of 1,000 robotaxis, a single 4-hour rainstorm can erase hundreds of thousands of dollars in potential revenue while eroding the user trust essential for mass adoption. In cities like Seattle, Boston, or London—where precipitation occurs 15-20% of the year—“fair-weather” autonomy is fundamentally unviable as a public transit replacement.
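To make the scale of the loss concrete, here is a back-of-envelope estimate. The fleet size and storm duration come from the scenario above; the rides-per-hour and fare figures are illustrative assumptions, not operator-reported numbers.

```python
# Back-of-envelope estimate of revenue lost when weather idles a fleet.
# Fleet size and storm duration come from the scenario above; rides per
# hour and average fare are illustrative assumptions, not reported figures.

fleet_size = 1_000           # robotaxis taken out of service
rides_per_vehicle_hour = 2   # assumed completed rides per vehicle per hour
avg_fare_usd = 25.0          # assumed average fare per ride
storm_hours = 4              # duration of the service interruption

lost_revenue = fleet_size * rides_per_vehicle_hour * avg_fare_usd * storm_hours
print(f"Estimated lost revenue: ${lost_revenue:,.0f}")  # -> $200,000
```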

The “Blind Spot” of Sensors: Why Cameras Are Not Enough
The autonomous vehicle industry has invested billions in sensor technology, yet adverse weather exposes fundamental limitations in how machines perceive their environment.
LiDAR, the backbone of most robotaxi perception systems, relies on laser pulses to build 3D point clouds. Raindrops and snowflakes scatter these beams, creating “salt-and-pepper” noise that degrades range and generates phantom obstacles. As precipitation intensity increases, detection accuracy plummets—forcing vehicles to slow dramatically or stop entirely.
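One common mitigation for this kind of clutter is statistical outlier removal over the point cloud. The sketch below applies radius-based filtering to a synthetic cloud; it is a generic denoising technique with illustrative thresholds, not any particular robotaxi vendor’s pipeline.

```python
# Minimal sketch of radius-based outlier removal, a standard way to suppress
# the sparse "salt-and-pepper" returns that rain adds to a LiDAR point cloud.
# Radius and neighbor thresholds are illustrative, not tuned production values.
import numpy as np
from scipy.spatial import cKDTree

def filter_rain_noise(points: np.ndarray, radius: float = 0.3,
                      min_neighbors: int = 3) -> np.ndarray:
    """Keep points with at least `min_neighbors` other returns within
    `radius` meters; isolated droplet echoes are discarded."""
    tree = cKDTree(points)
    # Neighbor counts within `radius` (each point counts itself once).
    counts = tree.query_ball_point(points, r=radius, return_length=True)
    return points[counts > min_neighbors]

# Example: a dense surface patch plus scattered phantom returns.
rng = np.random.default_rng(0)
surface = rng.normal(0.0, 0.05, size=(500, 3))   # tight cluster: a real object
droplets = rng.uniform(-5.0, 5.0, size=(50, 3))  # sparse rain clutter
cloud = np.vstack([surface, droplets])
print(filter_rain_noise(cloud).shape)            # nearly all clutter removed
```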
Cameras, particularly in Tesla’s vision-only architecture, face even more severe constraints. Water droplets on lenses cause occlusion, wet asphalt creates blinding glare from streetlights, and snow obliterates the contrast needed to distinguish lane markings. As Geotab’s analysis notes, cameras “physically cannot see through heavy precipitation better than a human eye”—yet unlike humans, they lack decades of experiential learning to compensate.
Radar offers weather resilience but trades it for resolution. While radio waves penetrate rain and fog effectively, standard automotive radar lacks the elevation resolution to accurately classify stationary objects. The result: false positives that cause unnecessary braking or, worse, failure to detect genuine hazards obscured by weather-induced clutter.
The industry response has been to add more sensors—thermal cameras, higher-resolution radar, redundant LiDAR arrays. But this approach inflates vehicle costs by thousands of dollars while still failing to address the core issue.
The Grip Physics Dilemma
Sensors perceive geometry, not friction. This is the fundamental paradox facing autonomous systems in adverse weather.
A LiDAR can map a snow-covered road surface with millimeter precision. A camera can detect ice crystals. But neither can reliably distinguish between wet asphalt offering adequate traction and black ice that will cause complete loss of control the instant brakes are applied. The critical gap is tactile feedback—the physical sensation human drivers use to “feel” when grip is deteriorating.
Aquaplaning exemplifies this challenge. When a thin water layer forms between tire and road, the tire rides on the water film rather than the pavement, and the vehicle loses both steering authority and braking effectiveness. By the time wheel slip sensors detect the problem, control is already compromised. Autonomous systems, lacking the predictive capability humans develop through experience, typically respond with extreme caution—slowing to impractical speeds or refusing to operate entirely.
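The physics can be made concrete with Horne’s classic NASA rule of thumb for the minimum dynamic hydroplaning speed, v ≈ 10.35·√p (v in mph, tire pressure p in psi). The sketch below evaluates it for typical passenger-car pressures; treat the outputs as order-of-magnitude estimates, since real onset also depends on tread depth, water depth, and tire geometry.

```python
# Worked example of Horne's NASA rule of thumb for the minimum dynamic
# hydroplaning speed: v ≈ 10.35 * sqrt(p), with v in mph and tire pressure
# p in psi. It is an empirical approximation; real onset also depends on
# tread depth, water depth, and tire geometry.
import math

def hydroplaning_onset_kmh(tire_pressure_psi: float) -> float:
    v_mph = 10.35 * math.sqrt(tire_pressure_psi)
    return v_mph * 1.609344  # mph -> km/h

for psi in (30, 35, 40):
    print(f"{psi} psi -> onset ≈ {hydroplaning_onset_kmh(psi):.0f} km/h")
# 35 psi gives roughly 98 km/h: well within ordinary arterial speeds.
```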
Current approaches attempt to infer grip from wheel slip data or correlate weather forecasts with conservative speed limits. But these are reactive measures. What’s needed is predictive capability: the ability to detect loss of traction before it occurs, giving the vehicle time to adjust trajectory, speed, or route.
This is where the industry’s sensor-centric approach reaches its limit. No camera, LiDAR, or radar array can measure the coefficient of friction between rubber and ice. The solution requires a different paradigm entirely.

Bridging the Gap: The Role of Virtual Sensing and Active Safety
Solving the weather problem demands technologies that complement visual perception with haptic intelligence—systems that give autonomous vehicles the sense of “touch” they currently lack.
Predicting the Invisible
DAI – Virtual Sensor Platform represents a fundamental shift from observing the environment to feeling it. The platform analyzes vehicle dynamics—microscopic variations in wheel rotation, suspension feedback, and chassis behavior—to detect aquaplaning, snow, and ice in real time, without requiring additional hardware.
Unlike traditional traction control systems that react to wheel slip, DAI provides predictive detection, identifying partial aquaplaning or early grip reduction before tire slip occurs. For a robotaxi navigating an unfamiliar city during a rainstorm, this translates to actionable intelligence: “Zone ahead has standing water; reduce speed by 15 km/h” rather than waiting for the wheels to hydroplane before triggering emergency protocols.
The system’s independence from internet connectivity, cloud services, or AI inference means it operates with millisecond-level latency—critical when aquaplaning can develop in less than a second. By detecting irregular terrain, tire wear, and even wheel misalignment through the same dynamic analysis, DAI functions as a comprehensive “nervous system” for the vehicle, continuously monitoring the interface between machine and road.
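Easyrain has not published DAI’s internal algorithms, so the following is only a hypothetical sketch of the general pattern such a virtual sensor implies: derive per-wheel slip from signals the vehicle already has, then flag a deteriorating trend before gross slip. Every name and threshold is an invented stand-in.

```python
# Hypothetical sketch of the general pattern behind a virtual grip sensor:
# derive per-wheel slip from signals the vehicle already has and flag a
# deteriorating trend before gross slip. This is NOT Easyrain's proprietary
# DAI algorithm; every name and threshold below is an invented stand-in.
from collections import deque

WHEEL_RADIUS_M = 0.33        # assumed effective rolling radius
EARLY_WARNING_SLIP = 0.03    # subtle slip growth worth acting on (illustrative)
GROSS_SLIP = 0.15            # level at which traction control would react

class VirtualGripSensor:
    def __init__(self, window: int = 20):
        self.history = deque(maxlen=window)   # recent slip ratios

    def update(self, wheel_speed_rad_s: float, vehicle_speed_m_s: float) -> str:
        if vehicle_speed_m_s < 1.0:
            return "OK"                       # ignore near-standstill noise
        slip = (wheel_speed_rad_s * WHEEL_RADIUS_M - vehicle_speed_m_s) / vehicle_speed_m_s
        self.history.append(slip)
        trend = sum(self.history) / len(self.history)
        if abs(slip) > GROSS_SLIP:
            return "SLIP"                     # reactive: traction already lost
        if abs(trend) > EARLY_WARNING_SLIP:
            return "GRIP_DEGRADING"           # predictive: slow down now
        return "OK"
```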
Restoring Control
Detection alone is insufficient if the vehicle cannot respond effectively. This is where active intervention changes the equation.
AIS – Active Safety System is the first active system capable of restoring grip before control is lost. By intelligently spraying pressurized fluid ahead of the tires, AIS eliminates the water layer that causes aquaplaning—transforming a dangerous loss-of-control event into a managed scenario where ABS and ESC can function normally.
For robotaxi operations, this represents critical safety redundancy. Even if the vehicle’s planning system miscalculates and enters a deep puddle at excessive speed, AIS provides a physical countermeasure. Testing demonstrates a 20% reduction in braking distance on heavily wet surfaces and a 225% increase in lateral traction in aquaplaning conditions—performance margins that can mean the difference between a safe stop and a collision.
The system’s modular architecture, with configurations starting at just 2.7 kg, makes it viable for integration into purpose-built robotaxis like Zoox’s carriage-style vehicles or retrofitted platforms like Waymo’s Jaguar I-Pace fleet. Critically, AIS enables the use of low rolling-resistance tires that optimize energy efficiency—a key consideration for electric autonomous fleets where range directly impacts operational economics.
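The detect-then-intervene pattern described above can be summarized in a short supervisory loop. The sketch below is a hypothetical illustration; the spray and planner interfaces are invented stand-ins, and AIS’s real actuation logic is not public.

```python
# Hypothetical supervisory loop coupling a grip detector to an active
# countermeasure, illustrating the detect-then-intervene pattern described
# above. The spray and planner interfaces are invented stand-ins; AIS's
# real actuation logic is not public.

class SprayActuator:
    """Stand-in for the fluid-spray hardware."""
    def activate(self) -> None:
        print("spraying ahead of the tires")

class MotionPlanner:
    """Stand-in for the vehicle's motion-planning interface."""
    def request_speed_reduction(self, kmh: int) -> None:
        print(f"reducing speed by {kmh} km/h")
    def request_controlled_stop(self) -> None:
        print("controlled stop; ABS/ESC operate on restored grip")

def safety_supervisor(grip_state: str, spray: SprayActuator,
                      planner: MotionPlanner) -> None:
    """Route each grip estimate to a proportionate response."""
    if grip_state == "GRIP_DEGRADING":
        planner.request_speed_reduction(15)   # predictive: ease off early
    elif grip_state == "SLIP":
        spray.activate()                      # clear the water film
        planner.request_controlled_stop()     # then brake on restored grip

safety_supervisor("SLIP", SprayActuator(), MotionPlanner())
```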
Fleet Intelligence
Individual vehicle capability must scale to fleet-level intelligence. ERC – Cloud Infrastructure aggregates real-time road surface data from equipped vehicles, creating dynamic maps of grip conditions across entire operational areas.
For a robotaxi fleet manager, this transforms weather from an unpredictable disruption into a manageable variable. When ERC identifies a section of downtown experiencing aquaplaning conditions, the dispatch system can reroute vehicles proactively, redistributing demand to safer zones while maintaining service availability. The platform’s integration of tire health data—wear levels, pressure deviations, alignment issues—enables predictive maintenance that prevents weather-related failures before they occur.
This shared intelligence model addresses what market analysts identify as a critical scaling requirement: the ability to operate reliably in variable conditions without requiring perfect information about every road segment. As the fleet grows, so does the precision of the road intelligence—a network effect that makes each additional equipped vehicle more capable than the last.
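As a rough illustration of the fleet-intelligence idea, the sketch below aggregates per-segment grip reports and feeds them into a grip-penalized shortest-path search, so dispatch naturally routes around slick segments. The data model and penalty scheme are assumptions, not ERC’s actual schema.

```python
# Rough illustration of fleet-level grip aggregation and weather-aware
# routing, in the spirit of what the article attributes to ERC. The data
# model and penalty scheme are assumptions, not Easyrain's actual schema.
import heapq
from collections import defaultdict

grip_reports = defaultdict(list)        # road segment -> recent grip estimates (0..1)

def report(segment: str, grip: float) -> None:
    grip_reports[segment].append(grip)  # in practice: timestamped and decayed

def segment_cost(base_time_s: float, segment: str) -> float:
    """Inflate travel cost on low-grip segments so routing avoids them."""
    readings = grip_reports.get(segment)
    grip = sum(readings) / len(readings) if readings else 1.0
    return base_time_s / max(grip, 0.1)  # e.g. grip 0.5 doubles the cost

def shortest_path(graph: dict, start: str, goal: str):
    """Plain Dijkstra over grip-penalized edge costs."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, base_time in graph.get(node, []):
            edge = f"{node}-{nxt}"
            heapq.heappush(queue, (cost + segment_cost(base_time, edge), nxt, path + [nxt]))
    return float("inf"), []

# Two equal-time routes; fleet vehicles report aquaplaning on the A-C leg.
graph = {"A": [("B", 60), ("C", 60)], "B": [("D", 60)], "C": [("D", 60)], "D": []}
report("A-C", 0.3)
report("A-C", 0.4)
print(shortest_path(graph, "A", "D"))   # routes via B, around the slick leg
```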
Future Outlook
The path to Level 5 autonomy—vehicles capable of operating anywhere, anytime, in any condition—requires acknowledging that perception alone is insufficient. The industry’s sensor revolution has delivered extraordinary capabilities in good weather. But true autonomy demands systems that can predict and respond to the physical dynamics that visual sensors cannot capture.
The integration of virtual sensing platforms like DAI – Virtual Sensor Platform with active safety systems like AIS – Active Safety System represents a convergence of complementary technologies: one that detects invisible threats, the other that physically neutralizes them. When coupled with cloud intelligence systems like ERC – Cloud Infrastructure, this creates a resilient architecture capable of scaling beyond fair-weather operational design domains (ODDs).
Regulatory frameworks are evolving to reflect this reality. NHTSA’s scrutiny of vision-only systems and the EU’s strict ODD requirements signal that authorities recognize the limitations of current approaches. Future certifications will likely mandate demonstrable capabilities in adverse conditions—not as edge cases, but as core competencies.
For the robotaxi industry, the weather problem is no longer a distant concern. It is the immediate barrier separating today’s promising pilots from tomorrow’s ubiquitous mobility networks. Solving it requires moving beyond the assumption that adding more cameras and LiDAR will eventually be sufficient. It demands technologies that give autonomous systems what they fundamentally lack: the ability to feel the road beneath them, respond when vision fails, and learn from every rainy mile driven.
The future of autonomous mobility will not be built on perfect weather. It will be built on systems resilient enough to operate when the rain won’t stop.
Frequently Asked Questions
Why do robotaxis struggle in rain and snow?
Robotaxis rely on sensors like LiDAR and cameras that have fundamental limitations in adverse weather. Raindrops scatter laser beams, creating phantom obstacles, while water on camera lenses causes occlusion and glare. More critically, these sensors can perceive geometry but not friction—they cannot detect the loss of grip between tire and road that occurs during aquaplaning or on icy surfaces. This creates a dangerous blind spot in which the vehicle cannot predict when it will lose control.
What is the economic impact of weather on robotaxi operations?
For a fleet of 1,000 robotaxis, a single 4-hour rainstorm can erase hundreds of thousands of dollars in potential revenue. In cities where precipitation occurs 15-20% of the year, such as Seattle, Boston, or London, “fair-weather only” autonomy is fundamentally unviable as a public transit replacement. Weather-related service interruptions also erode the user trust essential for mass adoption.
How does virtual sensing technology help autonomous vehicles in bad weather?
Virtual sensing platforms like DAI – Virtual Sensor Platform analyze microscopic variations in wheel rotation, suspension feedback, and chassis behavior to detect aquaplaning, snow, and ice in real time without requiring additional hardware. Unlike traditional systems that react to wheel slip after it occurs, virtual sensing provides predictive detection—identifying partial aquaplaning or early grip reduction before tire slip happens, giving the vehicle time to adjust speed or trajectory proactively.
What is aquaplaning and why is it dangerous for autonomous vehicles?
Aquaplaning occurs when a thin water layer forms between the tire and road surface, lifting the tire off the pavement so the vehicle loses both steering authority and braking effectiveness. It’s particularly dangerous for autonomous vehicles because traditional sensors cannot detect it until wheel slip occurs—by which time control is already compromised. Autonomous systems lack the predictive capability humans develop through experience, typically responding with extreme caution by slowing to impractical speeds or refusing to operate entirely.
Can active safety systems restore grip during aquaplaning?
Yes. Active safety systems like AIS – Active Safety System can restore grip before control is lost by intelligently spraying pressurized fluid ahead of the tires to eliminate the water layer that causes aquaplaning. Testing demonstrates a 20% reduction in braking distance on heavily wet surfaces and a 225% increase in lateral traction during aquaplaning conditions. This transforms a dangerous loss-of-control event into a managed scenario where standard safety systems like ABS and ESC can function normally.