
What is Multimodal Traffic Data?

Definitions

Multimodal traffic data is the collection and analysis of how all road users—including pedestrians, cyclists, micromobility users (e-scooters, e-bikes), and vehicles—move and interact across a street network. Unlike legacy monitoring methods that are mostly useful for tracking single modes in isolation (such as inductive loops for cars), multimodal data provides a holistic view of "street activity," enabling transportation planners to understand how different modes interact, compete for space, and flow through a network.

Key Components of Multimodal Data

To support initiatives like Vision Zero and Complete Streets, multimodal data must go beyond simple volume counts. A robust dataset typically includes four distinct layers of insight:

    • Volume & Flow: 24/7 counts of road users moving through a specific location, often analyzed alongside peak travel times and seasonal variations.

    • Classification: Accurately distinguishing between specific road user types to enrich the volume & flow data. Standard classifications include pedestrians, cyclists, e-scooters, passenger cars, light goods vehicles (LGVs), buses, and heavy goods vehicles (OGV1/OGV2).

    • Behavior (Speed, Journey Times, and Trajectories): Enriching the classified counts with behavioral information: the speeds of road users, their journey times, and the specific "desire lines" or paths they take. This layer reveals safety-critical behavior, such as how cyclists navigate intersections, where pedestrians cross mid-block, or whether vehicles encroach on bike lanes.

    • Safety Metrics (Near Miss): Proactive safety indicators that identify high-risk interactions between modes (e.g., a truck and a cyclist) using metrics like Time to Collision (TTC) and Post-Encroachment Time (PET). These near-miss metrics are used to identify relative risk exposure and prioritize locations, not to predict individual crashes.
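The surrogate safety metrics above can be illustrated with a short sketch. PET is the gap between one road user leaving a shared conflict zone and a second user entering it; smaller gaps mean higher risk. The event structure, field names, and the 1.5-second threshold below are illustrative assumptions, not taken from any specific product or standard:

```python
from dataclasses import dataclass

@dataclass
class ConflictEvent:
    """One road user passing through a shared conflict zone."""
    user_type: str   # e.g. "cyclist", "HGV"
    entry_s: float   # time the user enters the zone, in seconds
    exit_s: float    # time the user leaves the zone, in seconds

def post_encroachment_time(first: ConflictEvent, second: ConflictEvent) -> float:
    """PET: time between the first user clearing the zone and the
    second user entering it. Lower values indicate higher risk."""
    return second.entry_s - first.exit_s

# Hypothetical interaction: a cyclist clears the zone at t=12.4 s,
# then a truck enters the same zone at t=13.1 s.
cyclist = ConflictEvent("cyclist", entry_s=11.8, exit_s=12.4)
truck = ConflictEvent("HGV", entry_s=13.1, exit_s=14.0)

pet = post_encroachment_time(cyclist, truck)   # 0.7 s
# Many studies flag a PET below roughly 1.5 s as a near miss;
# the exact threshold varies by site and mode pairing.
is_near_miss = pet < 1.5
```

Aggregating these events over weeks, rather than reacting to single crashes, is what lets planners rank locations by relative risk exposure.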

How is Multimodal Data Collected?

While older technologies like pneumatic tubes or manual counts are limited to specific modes or short durations, modern cities increasingly rely on Computer Vision (AI) sensors.

    • Computer Vision Sensors (AI): These devices use artificial intelligence to detect and classify road users in real-time. For example, the New York City Department of Transportation (NYC DOT) utilizes these sensors to count up to nine different modes of travel simultaneously, including counting turning movements and detecting "near-miss" events, to inform street redesigns.
    • Privacy standards: Advanced multimodal sensors apply "Privacy by Design" principles, processing video on the edge (on the device itself) and discarding raw video frames 99.9% of the time, so that no personally identifiable information (PII) is stored and only anonymous statistical data is shared.

Applications in Transport Planning

Multimodal traffic data is the evidence base for four primary municipal objectives:

    • Active Transportation & Modal Shift: Measuring the uptake of sustainable travel modes (walking, cycling) to justify infrastructure investments, such as protected bike lanes, School Streets, or Low Traffic Neighborhoods.

    • Strategic Planning & Network Baselines: Establishing a comprehensive, city-wide baseline of road usage to inform long-term master planning and traffic modeling. Unlike short-term project counts, this application uses continuous data to understand macro-level trends—such as the interplay between freight, transit, and private vehicles—ensuring that fundamental design principles for 30-year infrastructure projects account for the movements of all transport modes, not just cars.

    • Proactive Road Safety: Moving beyond reactive crash data (KSI stats) to identify "near-miss" hotspots. This allows engineers to diagnose risk exposure and intervene before collisions occur.

    • Smart Signal Control: Optimizing traffic signals based on the presence of vulnerable road users (VRUs) or buses, rather than just vehicle queues, to improve equity and flow efficiency.
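The signal-control objective above amounts to weighting green time by who is present, not just how many vehicles are queued. A minimal sketch of such a rule, where the function, thresholds, and extension values are all illustrative assumptions rather than any real controller's logic:

```python
def extended_green_s(base_green_s: float,
                     vrus_waiting: int,
                     bus_approaching: bool,
                     max_green_s: float = 45.0) -> float:
    """Sketch of a multimodal green-time rule: extend the phase when
    vulnerable road users are waiting or a bus is approaching, capped
    at max_green_s. All values are illustrative, not field-calibrated."""
    green = base_green_s
    if vrus_waiting > 0:
        # Up to 10 s extra for pedestrians and cyclists at the crossing.
        green += 2.0 * min(vrus_waiting, 5)
    if bus_approaching:
        # Simple transit-priority bonus.
        green += 5.0
    return min(green, max_green_s)

# Three pedestrians waiting and a bus on approach: 20 + 6 + 5 = 31 s.
green = extended_green_s(20.0, vrus_waiting=3, bus_approaching=True)
```

In practice these inputs would come from real-time classified detections at the stop line, which is exactly what multimodal sensors provide.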

Data Accuracy and Validation

Reliable multimodal data requires high accuracy, particularly for smaller road users like pedestrians and e-scooters, which are often missed by radar or thermal sensors. Leading computer vision systems are independently validated to achieve 99% count accuracy and 97% classification accuracy or higher, under appropriate installation and operating conditions.
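Validation figures like these are typically produced by comparing sensor output against a manual ground-truth count over the same period. A minimal sketch of the two headline metrics; the function names and the sample numbers are illustrative assumptions:

```python
def count_accuracy(sensor_count: int, ground_truth: int) -> float:
    """Percentage count accuracy versus a manual ground-truth count:
    100% minus the absolute percentage error."""
    if ground_truth == 0:
        return 100.0 if sensor_count == 0 else 0.0
    return 100.0 * (1 - abs(sensor_count - ground_truth) / ground_truth)

def classification_accuracy(correct: int, total_detections: int) -> float:
    """Share of detected road users assigned the correct class."""
    return 100.0 * correct / total_detections

# Hypothetical validation sample: 1,000 cyclists counted manually,
# 992 detected by the sensor, 975 of those correctly classified.
count_acc = count_accuracy(992, 1000)            # 99.2%
class_acc = classification_accuracy(975, 992)    # ~98.3%
```

Note that count accuracy is direction- and period-specific in real validations; a single aggregate figure can hide offsetting over- and under-counts, which is why independent studies report per-mode and per-interval results.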

Further Reading