Robotaxis Were the Future: Autonomous Sensing Is Bringing Them into Our Cities
Robotaxis start blending into urban life

Driverless vehicles have stepped out of the realm of science fiction and onto the streets of today's cities. With just a tap on a smartphone, passengers can now summon robotaxis, autonomous vehicles that operate without a human driver, to take them where they need to go. Once considered a bold experiment, these on-demand, self-driving services are fast becoming a fixture of urban transportation systems around the globe.

The commercial rollout of robotaxis is already underway in cities worldwide, signaling a turning point where next-generation mobility becomes part of everyday life. Tesla, one of the companies at the forefront of this transition, has drawn attention with its Cybercab, an electric robotaxi that pairs strikingly futuristic design with advanced autonomous capabilities, evoking imagery straight out of a sci-fi blockbuster. Meanwhile, automakers and tech firms alike are pushing forward on multiple fronts, refining software, validating complex systems, and adapting to unpredictable urban environments, bringing the vision of fully autonomous transport ever closer.

But how are people responding to this driverless revolution? According to J.D. Power's 2024 U.S. Robotaxi Experience Study, 76% of those who have ridden in a fully autonomous vehicle say they trust the technology, a remarkable leap from the 20% trust level among those with no firsthand experience. And among residents of cities where robotaxis operate, trust stood at 34%, suggesting that simply seeing these vehicles in action can shift public perception. Familiarity, it seems, breeds confidence.

As robotaxis become a more familiar sight in everyday life, the barriers to embracing the technology gradually fall away, along with the unease and uncertainty that often accompany the unfamiliar. Even now, these autonomous vehicles are navigating our streets, collecting real-time data, adapting to complex environments, and quietly shaping the future of urban mobility, one ride at a time.

After years of testing, commercial rollouts begin

Robotaxis have long been touted as practical answers to urban challenges, from traffic congestion and spotty public transit to a growing shortage of professional drivers. Now the industry is turning a corner. What began as isolated tech demonstrations is fast evolving into a commercial, citywide reality. The market, once experimental, is entering a period of meaningful deployment.

Commercial robotaxi services are scaling rapidly, especially in the United States. On June 22, Tesla launched operations within a geofenced zone in South Austin, Texas, with plans to expand both reach and capacity over time. Waymo has logged over 10 million paid robotaxi rides and currently conducts more than 250,000 rides each week in major cities including Los Angeles, San Francisco, and Austin. Amazon subsidiary Zoox, meanwhile, has announced a robotaxi manufacturing facility in Hayward, California, with an annual production capacity of 10,000 units; the company is preparing to enter the market in earnest, aiming to launch commercial service in Las Vegas by the end of this year.

This wave of expansion represents more than a technical leap; it reflects a cultural shift in how people move through cities. Robotaxis are helping to accelerate a move away from private vehicle ownership toward shared-use models, aligning with the broader rise of the global sharing economy. Ride-hailing giant Uber has already begun adapting, forging a partnership with Waymo that allows users in select U.S. cities to book autonomous rides through the Uber app.

As these services move beyond the pilot stage, expectations for the underlying infrastructure have risen. The next phase demands more than just self-driving capability: scalable systems require pinpoint environmental sensing, precise decision-making, and rock-solid operational reliability. The race is now on to build the future-proof foundations that will carry autonomous mobility into the mainstream.

The sensor technologies powering robotaxi commercialization

Robotaxis, by design, must perceive and react to every driving situation without a human behind the wheel. In the chaos of city streets, that means detecting the environment with precision, making judgment calls in real time, and executing safe maneuvers, all autonomously. Powering these capabilities is the autonomous driving sensing solution, a system that serves as the vehicle's sensory organs.

As the global race to develop robotaxis accelerates, companies are taking markedly different approaches to sensor technology in an effort to set themselves apart. Tesla has committed to a vision-only system powered exclusively by cameras, aiming to reduce hardware costs and lean into a software-first model that scales efficiently. Its proprietary AI algorithms are designed to sharpen visual perception, allowing the company to balance affordability with service expansion. Most other players, including Waymo and Zoox, are instead betting on sensor fusion, combining cameras with Radar and LiDAR to overcome the limitations of any single input. This layered approach offers more robust perception and greater driving reliability, particularly in challenging road and weather conditions.

As competition grows and commercial rollout accelerates, the demand for sensing platforms that are reliable, adaptable, and easy to integrate is becoming more urgent. Most systems combine three core sensor types: cameras, Radar, and LiDAR. Each sensor plays a distinct role, grounded in its own detection principles and technical strengths.

When road rules matter most (signs, lights, and lanes)? The camera module rapidly detects and delivers critical road info

Among all sensing technologies used in autonomous driving, only camera modules can accurately capture visual details like color, texture, and contrast. That makes them essential for identifying traffic light changes, curved lane markings, pedestrian movement patterns, and road signs with exceptional precision. By capturing and processing high-frame-rate images, cameras supply the precise visual data that autonomous driving algorithms depend on, excelling at lane-keeping, traffic signal detection, and reading road infrastructure.

However, camera performance remains susceptible to external elements such as rain, snow, and fog, which can degrade image quality. To address these challenges, LG Innotek is refining its camera module technology with a high-efficiency heating system and an optimized structural design. By directly heating the lens area to prevent the buildup of frost and snow, this approach helps maintain stable and reliable sensing performance even in harsh weather.
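To see what this looks like in practice, the sketch below runs a classical lane-marking detection pass over a single camera frame: Canny edge detection followed by a probabilistic Hough transform, using the open-source OpenCV library. It is a minimal illustration of the kind of processing a camera-based perception stack performs, not LG Innotek's (or any vendor's) production pipeline, and the video file name is a placeholder.

```python
# Minimal lane-marking detection on one camera frame (illustrative only).
import cv2
import numpy as np

def detect_lane_segments(frame_bgr: np.ndarray) -> np.ndarray:
    """Return candidate lane segments as (x1, y1, x2, y2) rows."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)           # binary edge map

    # Keep only the lower half of the frame, where lane markings appear.
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2 :, :] = 255
    edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform: fit straight segments to edge pixels.
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=40, maxLineGap=20)
    return segments.reshape(-1, 4) if segments is not None else np.empty((0, 4), int)

# Hypothetical usage: read one frame from a dashcam recording.
cap = cv2.VideoCapture("dashcam.mp4")
ok, frame = cap.read()
if ok:
    print(f"found {len(detect_lane_segments(frame))} candidate lane segments")
cap.release()
```

A production stack would add camera calibration, perspective correction, temporal tracking across frames, and learned detectors. The point here is only that lane geometry is recovered from pixels, which is why image quality, and hence the lens heating described above, matters so much.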
When weather turns wild? Radar cuts through rain, fog, and snow

Radar, short for Radio Detection and Ranging, uses radio waves to detect the distance and speed of surrounding objects. Because its wavelength is far longer than that of light, it performs reliably even in low-visibility conditions like fog, rain, snow, or dust. This makes it essential for keeping autonomous vehicles operating safely and predictably, especially at night, in tunnels, or during unexpected situations.

Radar also measures distance and relative speed directly, offering strong long-range detection that holds up in adverse weather, which makes it effective for real-time speed prediction and obstacle alerts. Its relatively low resolution, however, limits its ability to capture fine shapes; distinguishing a pedestrian from a vehicle, for example, remains a challenge. To address these limitations, LG Innotek is developing next-generation 4D Imaging Radar that enhances resolution while adding vertical sensing capabilities, pushing radar beyond basic object detection toward precise classification of size, shape, and object type.
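The phrase "measures distance and relative speed directly" has simple math behind it. Automotive radars typically use a frequency-modulated continuous wave (FMCW): range falls out of the beat frequency between the transmitted and received chirps, and relative speed falls out of the Doppler shift. The sketch below works through both conversions; the 77 GHz band is standard for automotive radar, but the chirp parameters here are illustrative, not any product's specification.

```python
# Back-of-the-envelope FMCW radar math (illustrative parameters).
C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # carrier frequency, Hz (77 GHz automotive band)
BANDWIDTH = 300e6  # chirp sweep bandwidth, Hz
T_CHIRP = 50e-6    # chirp duration, s

def range_from_beat(f_beat_hz: float) -> float:
    """Target range in meters from the beat frequency of one chirp."""
    return C * f_beat_hz * T_CHIRP / (2.0 * BANDWIDTH)

def speed_from_doppler(f_doppler_hz: float) -> float:
    """Relative radial speed in m/s from the Doppler shift across chirps."""
    wavelength = C / F_CARRIER
    return wavelength * f_doppler_hz / 2.0

# A 200 kHz beat frequency and a 5 kHz Doppler shift:
print(f"range ~ {range_from_beat(200e3):.1f} m")     # ~5.0 m
print(f"speed ~ {speed_from_doppler(5e3):.1f} m/s")  # ~9.7 m/s
```

Because these quantities are read out of frequencies rather than inferred from image detail, they stay dependable in fog or heavy rain. What radar cannot do at this resolution is say much about an object's shape, which is exactly the gap 4D Imaging Radar aims to close.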
When intersections get messy? LiDAR makes sense of the chaos from all sides

LiDAR, short for Light Detection and Ranging, uses laser pulses to build a 3D map of the world around a vehicle. It can detect both static objects, like buildings and road signs, and dynamic ones, such as pedestrians and cyclists. That level of spatial judgment becomes especially critical in complex urban environments, such as busy intersections or unsignalized crosswalks, where vehicles must interpret their surroundings with high accuracy. Leveraging the high-resolution point clouds LiDAR generates, vehicles can analyze object distance, shape, and position in real time to navigate safely and confidently.

While Time-of-Flight (ToF) LiDAR is well regarded for precise distance and shape recognition, it has inherent constraints, particularly in measuring velocity. Its sensing accuracy can also suffer under conditions such as snow, rain, or intense sunlight, which scatter and reflect the laser signal. In response, a newer approach known as Frequency-Modulated Continuous Wave (FMCW) LiDAR is gaining attention as a promising alternative.

LG Innotek is developing FMCW LiDAR as part of its next-generation sensor portfolio: a system designed to enhance conventional LiDAR with precise velocity detection. It also offers greater resilience in harsh environments, maintaining stable performance in heavy rain or intense sunlight. A key feature is that each unit can recognize its own modulated signal and cancel out signals emitted by surrounding LiDAR units, effectively minimizing external interference. As a result, sensing in autonomous vehicles is becoming more accurate and more stable, and FMCW LiDAR is positioning itself as a next-generation solution that helps the industry move past earlier technological barriers.
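The difference between the two approaches comes down to what each one measures. ToF LiDAR times a pulse's round trip to obtain distance, while FMCW LiDAR also reads relative speed directly from the optical Doppler shift of its continuous, frequency-modulated beam. The sketch below uses illustrative numbers; 1550 nm is a wavelength commonly used in FMCW designs, but no specific product is implied.

```python
# ToF vs. FMCW LiDAR: the two basic read-outs (illustrative numbers only).
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1550e-9  # wavelength often used in FMCW LiDAR designs, m

def tof_distance(round_trip_s: float) -> float:
    """ToF: distance = c * t / 2, since the pulse travels out and back."""
    return C * round_trip_s / 2.0

def fmcw_radial_speed(doppler_shift_hz: float) -> float:
    """FMCW: radial speed = wavelength * Doppler shift / 2."""
    return WAVELENGTH * doppler_shift_hz / 2.0

# A pulse returning after 400 ns puts the target about 60 m away...
print(f"distance ~ {tof_distance(400e-9):.1f} m")             # ~60.0 m
# ...and a 12.9 MHz optical Doppler shift means it is closing at ~10 m/s,
# a quantity a pure ToF sensor would have to estimate across frames.
print(f"radial speed ~ {fmcw_radial_speed(12.9e6):.1f} m/s")  # ~10.0 m/s
```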
For robotaxis, the lesson is that no single sensor suffices: combining several is what lets them precisely perceive their surroundings and respond flexibly across diverse driving environments.

What will it take to make autonomous driving mainstream?

Each sensor in an autonomous driving system brings unique capabilities, but also its own limitations. That is why sensor fusion, the real-time integration of diverse sensor data, has become central to making autonomy work. It boosts detection precision, speeds up reaction time, and minimizes false readings. As a result, sensor fusion is increasingly viewed as a key technology for ensuring the reliability and safety of Level 4 and Level 5 autonomous driving, including robotaxis.

LG Innotek becomes a one-stop hub for sensor fusion and autonomous sensing solutions

LG Innotek is leveraging its deep expertise in smartphone camera modules and automotive electronics to build a next-generation sensing portfolio that spans ADAS and in-cabin camera modules, Radar, and LiDAR. This full-spectrum lineup positions the company to meet a broad range of customer needs and adapt to diverse autonomous driving strategies.

With its advanced design and manufacturing capabilities, LG Innotek boosts the performance of each sensing solution while keeping its size to a minimum, giving OEMs more freedom in design. Backed by years of precision design and manufacturing know-how, the company also brings advanced sensor fusion capabilities to the table, maximizing the strengths of each sensing modality. The result is a technology foundation that delivers high detection reliability and real-time responsiveness, even under the most complex driving conditions.

For robotaxis to become a truly mainstream fixture of urban life, better hardware alone won't be enough. What's needed is a complete evolution in sensing solutions, one that brings together data standardization, seamless AI integration, and end-to-end system optimization. These elements are critical for delivering the stable, predictable performance required to navigate the unpredictable nature of city streets. With each step forward, robotaxis grow more capable of perceiving their environment and making complex decisions, redefining the boundaries of intelligent mobility and surpassing the limits of human perception.

Building on its deep expertise across camera, Radar, and LiDAR technologies, LG Innotek is pushing the boundaries of autonomous driving through advanced sensor fusion, seamlessly integrating hardware and software into a cohesive sensing platform. With a complete portfolio of sensing solutions, scalable manufacturing capabilities, and design know-how tailored for autonomy, LG Innotek is poised to play a pivotal role as a key technology partner on the journey toward fully autonomous vehicles.