Google Beam hands-on: The most lifelike 3D video calling yet, and it still didn't completely impress me
Google Beam's 3D video calls felt natural but showed notable technical flaws.

Google Beam, introduced at Google I/O 2025, is the successor to Project Starline and aims to deliver ultra-realistic 3D video calling. It uses a light field display, six high-resolution cameras, and AI to render a lifelike 3D presence of the person on a call, with no headset or glasses required. The system conveys depth, natural lighting, and spatial audio, attempting to mimic face-to-face interaction.
In hands-on demos, reviewers described the experience as visually impressive: the 3D rendering of the person on the other end of the call appeared strikingly real. Eye contact and head movement translated well, which made interactions feel more personal than traditional video chats. The technology seemed especially promising for remote work, healthcare consultations, and conversations with distant family.
However, reviewers noted key limitations. The lifelike illusion breaks down when the viewer moves off-center, with the 3D projection becoming noticeably distorted. Minor jitter and lag in rendering also occurred, suggesting a need for better real-time processing or hardware calibration. These hiccups undercut the immersion and consistency of the experience.
Google Beam's setup is also quite bulky, requiring a dedicated booth with controlled lighting and precise camera placement. That limits how accessible or scalable the product is for home or casual users. Google hopes to refine the system for broader deployment, but for now it targets business and institutional customers.
In summary, Google Beam is a technical marvel that offers a glimpse into the future of communication, but it's not quite ready for everyday use. It stands out for its realism and ambition, yet still feels like a demo-phase product that needs polish before mass adoption.
Sources: The Verge, Gizmodo, Engadget, CNET, TechCrunch