Eye2Drive Sensor Graph
Logarithmic vs. Linear Response
This widget visually demonstrates the superior Dynamic Range of Eye2Drive technology compared to standard sensors. It plots the sensor’s voltage output against increasing light intensity, illustrating why standard sensors go “blind” in bright light while Eye2Drive remains effective.
How to use this Widget
- Start at “Office” Levels (Low Light): Drag the slider to the left (around 100-200 Lux). Notice that both the Red (Standard) and Blue (Eye2Drive) lines rise similarly. In normal lighting, both sensors capture data effectively.
- Push to “Direct Sun” (High Light): Drag the slider to the far right.
- Standard Sensor (Red): Watch the line hit a flat ceiling. This is Saturation. The sensor has reached its maximum voltage and stops recording new data, resulting in a “whiteout” image where obstacles become invisible.
- Eye2Drive Sensor (Blue): Watch the line continue to curve gently. It follows a Logarithmic path, compressing the bright data rather than cutting it off. This ensures that even in extreme brightness (like a tunnel exit), the sensor preserves detail and “sees” clearly.
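The two curves above can be sketched in a few lines of Python. The gain constants and the 1.0 V saturation ceiling below are illustrative assumptions for the demo, not Eye2Drive specifications:

```python
import math

V_MAX = 1.0    # saturation ceiling of the standard sensor (illustrative)
K_LIN = 1e-4   # linear gain of the standard sensor (illustrative)
K_LOG = 0.12   # scale factor of the logarithmic curve (illustrative)

def standard_response(lux: float) -> float:
    """Linear response that clips ("saturates") at V_MAX."""
    return min(K_LIN * lux, V_MAX)

def eye2drive_response(lux: float) -> float:
    """Logarithmic response: bright light is compressed, never clipped."""
    return K_LOG * math.log1p(lux)

# Office light, overcast daylight, direct sun:
for lux in (200, 2_000, 100_000):
    print(lux, standard_response(lux), round(eye2drive_response(lux), 3))
```

Past the saturation point the red (linear) curve returns the same voltage for any brighter input, so all detail is lost; the blue (logarithmic) curve keeps rising, so brighter scenes still produce distinguishable readings.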
Tunable Pixel Grid
This widget creates a grid of “pixels” that react to a virtual light source (your mouse cursor). It includes a toggle to compare a Standard Sensor (which saturates and blooms) with the Eye2Drive Sensor (which adapts individual-pixel gain to preserve detail).
How to use this Widget
- Standard Mode (Red/White): Move the mouse. Notice how the bright center “clips” to white (loss of detail) and “blooms” (bleeds) into surrounding pixels. This mimics traditional sensor failure.
- Eye2Drive Mode (Cyan/Green): Switch the toggle. Move the mouse. Notice that the center pixels turn Green (indicating active Gain Reduction/Protection). The pixels never saturate to white, and there is no blooming. This visualizes independent pixel adaptation.
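The per-pixel adaptation shown above can be sketched as follows. The threshold and compression rule are illustrative assumptions for the demo, not the actual Eye2Drive circuit:

```python
import math

SAT = 255        # 8-bit full scale ("white")
THRESHOLD = 200  # level above which a pixel protects itself (illustrative)

def standard_pixel(light: float) -> int:
    """No adaptation: bright input simply clips to white."""
    return min(int(light), SAT)

def eye2drive_pixel(light: float) -> int:
    """Each pixel independently reduces its own gain above THRESHOLD.
    The adaptation is local, so neighbouring pixels are unaffected
    (no blooming)."""
    if light <= THRESHOLD:
        return int(light)
    # Compress the overshoot logarithmically instead of clipping it.
    return THRESHOLD + int(math.log1p(light - THRESHOLD) * 5)

# A very bright spot: the standard pixel whites out,
# Eye2Drive keeps gradation between 300 and 3000.
for light in (150, 300, 3_000):
    print(light, standard_pixel(light), eye2drive_pixel(light))
```

Note that the standard pixel returns the same value (255) for an input of 300 and 3000, while the adapted pixel still tells them apart — that preserved gradation is the detail the widget visualizes.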
Silicon Retina Exploded Stack
This widget creates a volumetric representation of the Eye2Drive sensor architecture. It allows users to “explode” the chip layers to see the internal stacking and click on specific layers to understand the unique bio-inspired circuitry that differentiates your technology from standard CMOS sensors.
How to use this Widget
- Explode the Stack: Drag the “Layer Separation” slider. The chip components will vertically separate, revealing the internal structure.
- Explore the Layers: Hover over or click each layer (Microlens, CFA, Photodiode, Logic) to reveal technical specifications specific to Eye2Drive patents (e.g., “Local Adaptation Circuitry”).
- 3D Tilt: Move your mouse over the widget to slightly tilt the 3D model for a better angle.
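The layer stack the widget exposes could be modelled as simple metadata records. The descriptions below are paraphrased from this section; the field names and tooltip texts are illustrative, not datasheet values:

```python
# Layers ordered top-to-bottom, matching the exploded view.
LAYERS = [
    {"name": "Microlens",  "role": "Focuses incoming light onto each pixel"},
    {"name": "CFA",        "role": "Color filter array separating R/G/B light"},
    {"name": "Photodiode", "role": "Converts photons into electric charge"},
    {"name": "Logic",      "role": "Local Adaptation Circuitry (per-pixel gain control)"},
]

def layer_info(name: str) -> str:
    """Return the tooltip text shown when a layer is clicked."""
    for layer in LAYERS:
        if layer["name"] == name:
            return layer["role"]
    raise KeyError(name)

print(layer_info("Logic"))
```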
“Bit-Depth” Quality Toggle
This widget simulates the raw data feed that a computer vision system receives. It procedurally generates pixel data, allowing you to visually compare the Quantization Noise (Banding) and Shadow Noise typical of standard linear sensors against the smooth, information-rich output of the Eye2Drive sensor.
How to use this Widget
- Standard Linear Mode (Default): Observe the gradient (representing a sky or low-light wall). Notice the ugly “bands” of color (Posterization) and the static “fizz” in the dark areas (Noise). This “dirty” data confuses AI algorithms, causing them to miss objects.
- Toggle to Eye2Drive: Click the “Data Fidelity” switch.
- Eye2Drive Mode: The gradient becomes perfectly smooth. The banding disappears, and the noise in the shadows is eliminated. This demonstrates the clean, high-fidelity data stream that makes your sensor “AI-Ready.”
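Why a linear encoding bands in the shadows can be shown numerically: count how many distinct code values each encoding assigns to the darkest 1% of the scene. The 8-bit depth and the noise floor below are illustrative assumptions:

```python
import math

BITS = 8
CODES = 2 ** BITS

def linear_code(x: float) -> int:
    """Quantize scene luminance x in [0, 1] with a linear 8-bit ADC."""
    return min(int(x * CODES), CODES - 1)

def log_code(x: float, floor: float = 1e-4) -> int:
    """Quantize with a logarithmic curve: codes are spread evenly per
    stop of brightness instead of per unit of luminance."""
    x = max(x, floor)
    t = math.log(x / floor) / math.log(1.0 / floor)  # map to 0..1 in log space
    return min(int(t * CODES), CODES - 1)

# Distinct codes available in the darkest 1% of the scene:
shadow = [i / 10_000 for i in range(100)]  # luminance 0.0000 .. 0.0099
print(len({linear_code(x) for x in shadow}))  # a handful -> visible banding
print(len({log_code(x) for x in shadow}))     # many more -> smooth gradient
```

With only a few codes covering the whole shadow region, neighbouring luminance values collapse onto the same value and render as flat bands; the logarithmic curve spends far more of its code budget there, which is why the Eye2Drive gradient appears smooth.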