Why neuromorphic computing can't evolve alone

Open-source, interoperability, and hybrid hardware are shaping the path from research to real-world systems

By Petruț Antoniu Bogdan


Last month, Open Neuromorphic brought together key contributors to the advancement of the neuromorphic field, from simulation frameworks to hardware and data tooling.

The session, titled “Open-Source Neuromorphic Research Infrastructure: A Community Panel,” hosted a range of expert voices working at different layers of the neuromorphic stack. The conversation was frank but friendly. Below, I’ve gathered my key takeaways and reflections, along with what we’re doing at Innatera to help move the field forward.

This wasn’t just a technical exchange; it was a moment of alignment across the full neuromorphic landscape. From abstract neural modeling to physical hardware and real-world data pipelines, the session surfaced the field’s shared pain points and opportunities.

The conversation was structured across three tightly linked domains:

  • Simulation frameworks
  • Hardware deployment and interoperability
  • Data infrastructure 

These are interdependent layers that shape how neuromorphic systems are designed and how neural networks are trained and deployed on them. What made the discussion so valuable was the diversity of perspectives: software and hardware, datasets and interoperability tooling, research and commercial.

Consensus pointed towards progress being blocked by fragmentation, lack of shared standards, and the weight of reinvention. And yet, a central goal was shared throughout the panel: build infrastructure that’s open, adaptable, and collaborative. Why? Because the future of neuromorphic computing won’t be solved in isolation.

Here’s what’s critical in shaping the industry: 

1. Collaboration is no longer optional

There are multiple communities – academic and industry-led – that are all circling the same conclusion: we need to work together to move forward. From training challenges to hardware deployment, panelists echoed my belief that efforts like the Edge AI Foundation’s Neuromorphic Working Group, STANCE, and THOR aren’t isolated; they’re starting to converge. The opportunity now is to align goals across scales, not fragment further.

2. Computational frameworks must bridge research and deployment

The panel highlighted a range of simulation environments – Nengo, BindsNET, GeNN, Rockpool, Brian2 – each designed for different levels of abstraction and use cases. The real opportunity lies in making these tools interoperable and easier to use. From the commercial perspective, the goal is to lower the technical bar: you shouldn’t need a PhD to build and deploy SNNs. From a research perspective, these tools are also extensible, enabling more complex simulations. In other words, all of these tools offer an accessible initial strategy that helps users build foundations and intuition with reasonable performance KPIs; for more in-depth optimization, that strategy can then be enhanced with non-standard implementations.
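To make concrete what these frameworks abstract away, here is a minimal leaky integrate-and-fire (LIF) sketch in plain NumPy. The parameter values and helper name are illustrative and not drawn from any of the tools above; frameworks like Brian2 or Rockpool would express the same dynamics at a much higher level.

```python
import numpy as np

def lif_step(v, i_in, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns the updated membrane potential and a boolean spike flag."""
    v = v + dt * (i_in - v) / tau      # leaky integration toward the input current
    spiked = v >= v_th                 # crossing the threshold emits a spike
    v = np.where(spiked, v_reset, v)   # reset the membrane after a spike
    return v, spiked

# Drive 3 neurons with constant currents of different strengths.
v = np.zeros(3)
currents = np.array([0.8, 1.5, 3.0])
spike_counts = np.zeros(3, dtype=int)
for _ in range(200):
    v, spiked = lif_step(v, currents)
    spike_counts += spiked.astype(int)
```

With these (arbitrary) parameters, the weakest current never drives its neuron over threshold, while stronger inputs produce proportionally higher spike rates. Hiding exactly this kind of boilerplate behind sensible defaults is what lowers the bar for newcomers.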

As a small aside, I am personally passionate about video games, and I see some (admittedly very loose) parallels between game design and the creation of tooling for training and deploying machine-learning signal processing pipelines. I'm not formally trained in UX, so perhaps this is already well understood in that community. Far from advocating for the gamification of these engineering tools, I'm arguing that tools should advertise good, generic "initial strategies" in their published examples and tutorials, so that users develop an intuitive and technical understanding of technology that seems exotic, and can reach powerful or optimal strategies more quickly down the road.

3. Open-source isn’t just academic, it’s operational

Projects like Faery aren’t just sitting in GitHub repositories; they’re deployed on high-altitude balloons and even on the ISS. These tools have real-world utility, but the community still relies heavily on underfunded contributors and student work, so long-term sustainability remains a major gap. Funding, career paths, and shared responsibility are needed to keep this momentum alive. It’s wonderful to see that contributions to these projects can happen even without a robust funding framework, for example through the ONM hacking hours initiative; one can only imagine how much more progress could be made with such a framework in place.

4. Mixed-signal hardware is critical for ultra-low-power systems

The value of analog and mixed-signal neuromorphics is more than theoretical. In power-constrained environments like implantable electronics, it’s the only viable path forward. Several panelists reinforced that the future is likely not about choosing analog or digital; it’s about hybrid systems that adapt to the application. This is a perspective I’m personally glad to see shared, as the system we’ve built at Innatera has precisely these characteristics.

DOING OUR PART

At Innatera, we’re building toward the future the panel called for. Here’s how we’re mapping to the earlier points: 

1. Enabling cross-community collaboration

We’re contributing directly to the Edge AI Foundation’s Neuromorphic Working Group, which brings together academia and industry to address interoperability, benchmarking, and deployment for real-world edge systems. We also collaborate with parallel efforts like THOR, ensuring that solutions scale across devices, edge, and cloud. The working group’s first event took place at the Edge AI Foundation’s Milan event and focused on the challenges and opportunities in commercializing neuromorphic technologies and maturing the open-source community.

2. Lowering the barrier to hybrid pipelines

We’re developing heterogeneous signal processing pipelines that integrate Spiking Neural Networks (SNNs) into broader edge-AI stacks. Whether combined with CNNs or DSPs, these workflows are designed to be developer-friendly; no PhD required. Our tools aim to abstract the complexity of neuromorphic processing so engineers can focus on building, training, and deploying, not debugging. This is done by creating entire signal processing pipelines in Talamo (our PyTorch-based high-level training and simulation tool), which can then, in certain cases, be converted and compiled down to application code that executes on device.
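To illustrate the shape of such a heterogeneous pipeline, here is a framework-agnostic sketch in plain NumPy. The stage names, thresholds, and readout rule are hypothetical placeholders for illustration only; they are not Talamo’s actual API.

```python
import numpy as np

# Hypothetical three-stage pipeline: DSP front-end -> spike encoder -> SNN readout.
# All names and parameters below are illustrative, not a real product API.

def dsp_frontend(signal, window=4):
    """Smooth the raw signal with a moving average (a stand-in for a DSP filter)."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

def rate_encode(features, threshold=0.5):
    """Convert each filtered sample into a binary spike when it exceeds a threshold."""
    return (features > threshold).astype(int)

def snn_readout(spikes):
    """Toy classifier: label the window 'active' if the spike rate exceeds 25%."""
    return "active" if spikes.mean() > 0.25 else "idle"

# Run a short synthetic signal through the full pipeline.
signal = np.array([0.1, 0.2, 0.9, 1.0, 0.8, 0.1, 0.0, 0.1])
spikes = rate_encode(dsp_frontend(signal))
label = snn_readout(spikes)
```

The design point is that each stage has a narrow, well-defined interface, so a conventional DSP block, a spike encoder, and an SNN can be composed, swapped, or retargeted to different hardware without rewriting the whole pipeline.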

3. Supporting open-source for real-world impact

We’re proud to sponsor open-source initiatives directly (more details to follow). And through our involvement in the Edge AI Foundation, we’re backing benchmarking work with NeuroBench, helping make neuromorphic models portable, testable, and reproducible.

4. Delivering flexible, hybrid, and heterogeneous neuromorphic hardware

Our Pulsar chip was designed from the ground up to support both analog mixed-signal and digital SNNs, enabling developers to explore and deploy the right model for their power, latency, and application constraints. This flexibility is key to unlocking use cases from consumer wearables to implantable healthcare devices.

Brains consist of inherently specialised units and regions; this organisation parallels heterogeneous systems such as Pulsar's. As researchers and practitioners work out how to perform better sensor fusion, they create complex signal processing pipelines that can only be deployed in real-world applications when their specialised subsystems are orchestrated efficiently.

WHERE WE GO FROM HERE

Multiple communities are arriving at the same conclusion: we need to work together. Interoperability does not mean uniformity, and it doesn’t come at the price of innovation.

If we want neuromorphic computing to go from research to real-world systems, we need to lower technical barriers, fund the infrastructure, and collaborate across boundaries. 

This is where the future of truly smart technology is heading. And we’re all in.