Every year, CES is a window into where technology is heading. It’s where the grand claims about AI and technology become concrete products you can see and touch. This year’s show, held in January 2026, made something very clear: the focus of AI development is moving from the purely digital realm into the physical world. After years of talking about artificial intelligence as something that existed primarily in software, we’re finally seeing it embedded in robots that move, work, and operate in real environments.
Physical AI isn’t just a buzzword. It represents a fundamental shift in how we’re thinking about intelligent systems. Instead of AI models running on servers, processing text or images, we’re talking about autonomous systems that have to interact with the unpredictability of the real world. A robot working in a factory has to navigate changing conditions, handle unexpected objects, and respond to novel situations. That’s exponentially more complex than a language model answering questions. And the implications are enormous.
For someone like me, working in venture capital focused on AI and technology, watching this shift happen in real time is fascinating. We’ve been waiting for this moment. We’ve known it was coming. And suddenly, it’s here. The hardware is catching up with the software capabilities. The applications are becoming real. The economics are starting to make sense.
Nvidia’s Rubin Platform: The Infrastructure Layer
The most significant announcement from CES 2026, from a technical perspective, was Nvidia’s Rubin platform. For those not deeply embedded in AI infrastructure, Nvidia has been the critical company enabling AI development. Their chips power most of the AI training and inference happening in the world. With Rubin, they’ve launched a suite of six new chips specifically designed for physical AI applications.
What makes this announcement significant is the scale of the improvement. The new Rubin platform achieves a 10x reduction in inference costs compared to previous generations. That’s not incremental improvement. That’s transformational. When you reduce the computational cost of running AI models by 90%, you fundamentally change what’s economically viable to deploy. Applications that were previously too expensive to run suddenly become viable. Robots that would have cost millions to operate become affordable. Autonomous systems that required constant cloud connectivity can run locally and independently.
This is the infrastructure layer that makes physical AI scalable. You can build the best robot in the world, but if it costs millions per month to run the AI that controls it, it’s not going to be deployed at scale. When you can run sophisticated AI systems with a 90% cost reduction, everything changes. You can deploy thousands of robots instead of hundreds. You can afford to have them operating independently. The economics work differently.
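To make the economics concrete, here’s a toy back-of-envelope calculation. The cost figures and budget are entirely hypothetical assumptions for illustration, not Nvidia pricing; the only number taken from the announcement is the 10x reduction factor. It shows why the same compute budget jumps from hundreds of robots to thousands:

```python
# Hypothetical fleet economics under a 10x inference-cost reduction.
# All monetary figures below are illustrative assumptions.

OLD_COST_PER_ROBOT_MONTH = 10_000.0   # assumed monthly inference cost per robot (old chips)
COST_REDUCTION_FACTOR = 10            # the claimed 10x reduction
MONTHLY_COMPUTE_BUDGET = 1_000_000.0  # assumed fleet-wide monthly compute budget

# New per-robot cost after the 10x reduction (a 90% cut)
new_cost = OLD_COST_PER_ROBOT_MONTH / COST_REDUCTION_FACTOR

# How many robots the same budget supports, before and after
robots_before = int(MONTHLY_COMPUTE_BUDGET // OLD_COST_PER_ROBOT_MONTH)
robots_after = int(MONTHLY_COMPUTE_BUDGET // new_cost)

print(f"Cost per robot-month: {OLD_COST_PER_ROBOT_MONTH:,.0f} -> {new_cost:,.0f}")
print(f"Robots deployable on the same budget: {robots_before} -> {robots_after}")
```

Run with these assumed inputs, the fleet grows from 100 robots to 1,000 on an unchanged budget, which is the shift from hundreds to thousands described above.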
From an investment perspective, this is exactly the kind of infrastructure advancement that creates opportunity. When the cost and complexity of deploying AI drops dramatically, it opens up entire new categories of applications. Companies that couldn’t afford to build with AI can now do so. The competitive dynamics shift. The winners and losers change.
Boston Dynamics Atlas: Field Testing Reality
If Nvidia’s announcement was about infrastructure, Boston Dynamics’ news at CES was about the practical frontier of what’s possible right now. Boston Dynamics’ Atlas robot has begun field testing at a Hyundai manufacturing plant. That’s not a controlled environment. That’s a real factory. Real conditions. Real work. And the implications are significant.
Boston Dynamics has been building impressive robots for years. They’ve released videos that go viral—robots doing backflips, opening doors, coordinating with each other. But there’s often a gap between impressive demo and practical application. Field testing at a manufacturing plant changes that. It means the robot is actually being put to work in a real environment where it needs to deliver value. It’s not about the spectacle. It’s about whether it can actually do useful work that improves productivity.
What’s particularly interesting about Atlas being deployed at Hyundai is the choice of manufacturing as the proving ground. Manufacturing is where robots have historically been most successful, but it’s also incredibly demanding. You have precision requirements, safety considerations, coordination with human workers, and the need for genuine reliability. If Atlas can prove itself in this environment, it validates the approach Boston Dynamics has been taking. And it means the path from experimental robotics to commercial deployment is shortening.
This is significant for the broader physical AI ecosystem because it shows that the gap between research and application is closing. We’re moving from ‘what’s theoretically possible’ to ‘what can we actually deploy and make money with.’ That’s when things get interesting from a market perspective.
The Broader Robotics Landscape
Beyond the headline announcements from Nvidia and Boston Dynamics, CES 2026 showcased the breadth of activity in robotics and autonomous systems. There were humanoid robots, mobile manipulators, autonomous delivery systems, and countless applications of AI to physical tasks. The ecosystem is diverse and rapidly evolving.
What struck me across all these announcements was how much progress has been made in making robots more dexterous, more intelligent, and more capable of handling unstructured environments. Robotics has historically been confined to very structured settings—manufacturing lines where everything is predictable, movements are repetitive, and variation is minimal. The frontier now is robots that can operate in unstructured environments like warehouses, construction sites, farms, and homes. These are much harder problems because they require real-time perception, decision-making, and adaptation.
The convergence of improved AI models, better hardware, more capable sensors, and more sophisticated control systems is creating a genuinely new technological frontier. We’re at a point where the engineering problems are solvable. The economic constraints are loosening. The applications are becoming clearer. The timeline for meaningful deployment is shortening dramatically.
What This Means for Labour and Society
I want to be honest about what physical AI and robotics advancement means for the labour market. It means significant disruption. Certain jobs are going to disappear. Manufacturing roles that are repetitive and rule-based will be automated. Warehousing, logistics, delivery—these sectors are all in scope for robotics. This is real change that will affect real people.
I’m not someone who believes automation is inherently good or bad. It’s a tool, and like all tools, it can be used well or poorly. But I think we need to have honest conversations about what’s coming. We need to prepare. We need to think about retraining, about social safety nets, about how wealth created by automation is distributed. Because these systems will create tremendous value. The question is who captures that value and what happens to the people displaced.
From a business perspective, I see massive opportunity. From a human perspective, I see both opportunity and risk. The companies that figure out how to deploy these systems responsibly, that invest in their workforces, that think about the broader impact—those are the ones that will build sustainable, defensible businesses. Quick optimisations that destroy communities are ultimately short-sighted.
Investment Implications
From the perspective of NexaTech Ventures, the announcements at CES 2026 validate the bet we’ve been making on physical AI and hardware. The infrastructure layer is becoming increasingly sophisticated. The applications are becoming real. The timeline is compressing. This means it’s a good moment to be investing in companies that are building in this space.
What we’re looking for at NexaTech are companies that have three things: first, a genuine technological advantage—whether that’s a better algorithm, more efficient hardware, or a novel approach to a hard problem. Second, a clear path to market and revenue. We’ve learned that being technically interesting isn’t enough. You need to be solving a problem that someone will pay for. Third, teams that understand both the technical challenges and the practical realities of deploying physical systems in the real world.
The robotics companies that will succeed over the next five years are those that can bridge the gap between cutting-edge research and practical deployment. They’re the ones that understand manufacturing, or logistics, or construction well enough to know what actually matters. They’re the ones that can work with customers to refine their approach. They’re the ones that think about reliability and safety and cost of deployment, not just capability.
The Hardware Challenge
One thing that struck me at CES this year was how much the hardware side matters. There’s sometimes a tendency in tech to focus entirely on software and algorithms, to assume that hardware is a commodity that will eventually become cheap and abundant. But in robotics and physical AI, the hardware is genuinely difficult and genuinely important.
You need actuators that are strong enough, efficient enough, and durable enough for real work. You need sensors that can reliably perceive the environment. You need power systems that provide enough energy for extended operation. You need structural materials that can handle the stresses of physical interaction. These aren’t trivial problems. And they’re not just about throwing money at them. They require deep expertise, iteration, and often novel approaches.
This is why I’m excited about the hardware companies working on these problems. They’re not crowded fields like large language models are becoming. There’s genuine innovation happening. There’s real differentiation possible. And there’s a clear path to value because if you solve a hardware problem, you’ve done something that can’t be easily replicated or downloaded.
The Integration Challenge
Beyond individual breakthroughs in AI models or robotics hardware, what CES 2026 highlighted is the challenge of integration. You need AI models, hardware, sensors, software stacks, and control systems all working together seamlessly. You need systems that are reliable, that can be deployed at scale, that can be maintained and updated.
The companies that will win in physical AI aren’t necessarily the ones with the most sophisticated AI models. They’re the ones that can assemble all these pieces into a coherent system and make it work reliably in the real world. That’s why we see established companies like Hyundai getting involved—they have manufacturing expertise, supply chain capability, and customer relationships. They can take robotics technology and actually deploy it.
For startups, this is both opportunity and challenge. The opportunity is that you can focus on solving one specific problem really well. The challenge is that you need to integrate with broader systems, work with established companies or integrators, and ultimately create something that’s deployable at scale. The winning strategy often isn’t to build everything yourself. It’s to build something critically important and then partner effectively.
Timeline for Real Impact
One thing I’m frequently asked is: when will this actually matter? When will robots be everywhere? When will AI-driven physical systems be commonplace? The answer is: sooner than most people think, later than the optimists claim.
Based on what I saw at CES and what we’re seeing in our own portfolio, I’d expect significant deployment of physical AI systems in manufacturing, logistics, and agriculture over the next 24-36 months. These are domains with clear economic incentives, relatively structured environments, and customers with the capital and sophistication to deploy new systems. Construction and home robotics will take longer—probably 3-5 years for meaningful scale—because they’re more complex environments and the economics are less clear.
The trajectory is clear, though. The pace of progress is accelerating. The infrastructure is becoming more capable and more affordable. The applications are becoming real. We’re not in speculative territory anymore. This is happening.
Looking Forward
CES 2026 felt like a moment where the conversation around AI and robotics shifted. We’ve moved from ‘when will this be possible’ to ‘what should we build and who will build it.’ The technical barriers are falling. The economic case is becoming clearer. The winners and losers are starting to separate.
For investors, this is an incredibly interesting time. For business leaders, the question is whether your industry is in scope for physical AI automation and, if so, how you position yourself. For workers and society, it’s a time to think carefully about how we want these technologies deployed and what we want the impact to be.
What excited me most at CES wasn’t any individual announcement. It was the maturity of the entire ecosystem. We’ve moved from experimental robotics to a genuine industry. There are real products, real deployments, real companies building real value. The frontier is shifting. And for those of us who’ve been betting on this shift, it’s validating to see it finally arriving.
Scott Dylan is a Dublin-based British entrepreneur, investor, and mental health advocate. He is the Founder of NexaTech Ventures, a venture capital firm with a £100 million fund supporting AI and technology startups across Europe and beyond. With over two decades of experience in business growth, turnaround, and digital innovation, Scott has helped transform and invest in companies spanning technology, retail, logistics, and creative industries.
Beyond business, Scott is a passionate campaigner for mental health awareness and prison reform, drawing from personal experience to advocate for compassion, fairness, and systemic change. His writing explores entrepreneurship, AI, leadership, and the human stories behind success and recovery.