Solar is tripling. Onshore wind is doubling. Demand flexibility is growing fourfold by 2030. The distribution grid was not built for this, and the models that govern every connection offer, flexibility contract, and reinforcement decision are showing it. We are building the mathematical framework to quantify what those models do and do not know.
Request early access

Every connection offer, every flexibility contract, every reinforcement investment depends on a network model. Those models were built for a conventional, stable network. The grid is no longer that.
DNOs apply worst-case margins rather than risk blackouts on untrustworthy models. Connection queues stretch years. Flexibility markets cannot clear confidently. Every reinforcement investment is over-engineered. The problem compounds as renewables grow.
The distribution grid is hosting loads it was never designed for: DER, EVs, data centres, flexible demand. Source: NESO Clean Power 2030 Report, November 2024.
Connection offers, flexibility contracts, reinforcement investments, hosting capacity maps: all built on network models that were not designed for this level of complexity.
We have heard this from dozens of practitioners across DNOs, DSOs, and consultancies. Renewables and unprecedented loads are breaking the assumptions baked into those models.
The tools the industry uses today (deterministic power flow, worst-case static headroom, manual CIM audits) cannot quantify what they do not know. The answer is not a better map. It is a principled mathematical framework for inferring what the grid's measurements actually imply about its true state.
More certainty about where grid models hold and where they fail would unlock the visibility needed to connect renewables faster and run flexible markets safely. Quantifying that certainty has a known solution in applied maths: Bayesian model calibration.
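To make that concrete, here is a toy sketch of the idea (illustrative only, with hypothetical numbers, not our production engine): calibrating a single feeder resistance against noisy SCADA voltage-drop readings, starting from the catalogue value in the CIM and ending with a posterior that says how far, and how confidently, reality departs from it.

```python
import numpy as np

# Hypothetical example: infer a feeder resistance R (ohms) from noisy
# voltage-drop measurements, V_drop = I * R + noise. Illustration only.
rng = np.random.default_rng(0)

true_R = 0.42                                  # the "reality" we hope to recover
currents = rng.uniform(20, 120, 50)            # measured feeder currents (A)
v_drop = true_R * currents + rng.normal(0, 0.5, 50)   # noisy SCADA readings (V)

# Prior belief about R, e.g. the CIM catalogue value of 0.30 ohms +/- 0.10
R_grid = np.linspace(0.1, 0.8, 701)
log_prior = -0.5 * ((R_grid - 0.30) / 0.10) ** 2

# Gaussian measurement likelihood with 0.5 V noise
resid = v_drop[None, :] - R_grid[:, None] * currents[None, :]
log_lik = -0.5 * np.sum((resid / 0.5) ** 2, axis=1)

# Posterior over R: prior times likelihood, normalised on the grid
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, R_grid)

mean_R = np.trapz(R_grid * post, R_grid)
sd_R = np.sqrt(np.trapz((R_grid - mean_R) ** 2 * post, R_grid))
print(f"posterior R = {mean_R:.3f} +/- {sd_R:.3f} ohms (catalogue said 0.300)")
```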
Low voltage grids are enormous. Naive methods are computationally intractable. We are deploying treewidth-optimised graph decomposition to analyse where your model is shaky.
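As a toy illustration of why decomposition matters (a hypothetical feeder built with networkx, not our solver): exact probabilistic inference on a graph costs roughly exponential time in the width of its tree decomposition, so finding decompositions with small bags is what makes city-scale LV networks tractable.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_fill_in

# Hypothetical toy feeder: a trunk with short laterals plus one normally-open
# link that closes a loop. Real LV networks are far larger but similarly sparse.
G = nx.path_graph(12)                                            # trunk buses 0..11
G.add_edges_from((i, f"lateral_{i}") for i in range(0, 12, 3))   # spur buses
G.add_edge(2, 9)                                                 # loop via a link box

# Heuristic tree decomposition: exact message passing costs roughly
# O(n * exp(width)), so keeping the bags small is what makes scale feasible.
width, decomposition = treewidth_min_fill_in(G)
print(f"approximate treewidth: {width}")
print(f"{decomposition.number_of_nodes()} bags, largest bag has "
      f"{max(len(bag) for bag in decomposition)} buses")
```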
DNOs and TSOs do not just need parameter estimates: they need to know if a branch is missing, a tap setting is wrong, a sensor is faulty. We handle continuous and discrete error hypotheses within the same probabilistic framework.
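A minimal sketch of what that means in practice (hypothetical numbers, a deliberately crude feeder model): two discrete hypotheses about whether a parallel branch recorded in the CIM is actually in service, each with the unknown conductor resistance marginalised out as a continuous nuisance parameter, compared on the same probabilistic footing.

```python
import numpy as np

# Hypothetical question: is a parallel branch that the CIM records as in service
# actually connected? Two discrete hypotheses, each with a continuous nuisance
# parameter (conductor resistance r) marginalised out. Illustration only.
rng = np.random.default_rng(1)

currents = rng.uniform(30, 150, 40)                  # measured currents (A)
v_drop = 0.40 * currents + rng.normal(0, 0.5, 40)    # reality: the branch is OUT

r_grid = np.linspace(0.1, 1.0, 600)                  # prior support for r (ohms)
log_prior_r = -0.5 * ((r_grid - 0.45) / 0.10) ** 2   # catalogue says ~0.45 ohms
sigma = 0.5                                          # SCADA noise (V)

def log_evidence(effective_R):
    """Marginal likelihood of the data with r integrated against its prior."""
    resid = v_drop[None, :] - effective_R[:, None] * currents[None, :]
    log_joint = log_prior_r - 0.5 * np.sum((resid / sigma) ** 2, axis=1)
    m = log_joint.max()
    return m + np.log(np.trapz(np.exp(log_joint - m), r_grid))

# H0: branch in service, two conductors in parallel -> effective R = r / 2
# H1: branch missing                                 -> effective R = r
log_m = np.array([log_evidence(r_grid / 2), log_evidence(r_grid)])
post = np.exp(log_m - log_m.max())
post /= post.sum()                                   # equal prior odds
print(f"P(branch in service) = {post[0]:.3f},  P(branch missing) = {post[1]:.3f}")
```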
Map of where the CIM diverges from reality
Probabilistic headroom per substation per year
Safe capacity bounds under model uncertainty
Confidence intervals for flex market clearing
Connection risk as a quantified probability for infrastructure capital
Portfolio-level site risk for IPPs and infrastructure funds
Infrastructure funds, project finance lenders, and large IPPs making site allocation decisions need to know which connections are realistic and when, not just what the queue says. Published headroom figures are built on worst-case static assumptions that bear little relation to how the grid actually behaves.
Loom Light quantifies connection risk as a probability. We calibrate distribution network models against real SCADA measurements, revealing where actual capacity diverges from published figures and where model uncertainty bites hardest.
A single corrected site selection or investment decision pays for the additional probabilistic analysis.
Replace the binary "queue position" with a probability distribution over connection timing. Feed connection risk into IRR calculations and debt covenants with actual numbers.
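As a hedged sketch of what that looks like (every figure below is hypothetical): sample energisation timing from a calibrated distribution, push each draw through the cash-flow model, and read off the IRR distribution and the probability of clearing a hurdle rate.

```python
import numpy as np

# Hypothetical site: replace a single "connection year" with a distribution over
# energisation timing and push it through the cash-flow model. Illustration only.
rng = np.random.default_rng(2)
n_draws = 2_000

# Calibrated timing view: 70% chance the offer date roughly holds, 30% chance
# the site sits behind a reinforcement bottleneck for several extra years.
delay_years = np.where(rng.random(n_draws) < 0.7,
                       rng.gamma(2.0, 0.5, n_draws),          # ~1 year slip
                       3.0 + rng.gamma(2.0, 1.0, n_draws))    # ~5 year slip

capex, annual_cash, life_years, horizon = 12.0, 1.6, 25, 35   # GBP m, years

def irr(cashflows, lo=-0.5, hi=1.0):
    """Simple bisection IRR; assumes NPV decreases as the rate rises."""
    npv = lambda r: np.sum(cashflows / (1 + r) ** np.arange(len(cashflows)))
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

irrs = []
for d in delay_years:
    cf = np.zeros(horizon + 1)
    cf[0] = -capex                                   # spend at financial close
    start = int(np.ceil(1 + d))                      # first full operating year
    cf[start:start + life_years] += annual_cash
    irrs.append(irr(cf))
irrs = np.array(irrs)

p10, p50, p90 = np.percentile(irrs, [10, 50, 90])
print(f"IRR P10 / P50 / P90: {p10:.1%} / {p50:.1%} / {p90:.1%}")
print(f"P(IRR clears an 8% hurdle) = {(irrs > 0.08).mean():.2f}")
```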
Screen dozens of sites simultaneously against calibrated network models. Identify which have genuine near-term connection probability and which are stuck behind a structural bottleneck, before committing capital.
Our analysis is built on the same public CIM and SCADA data your engineers use, run through a principled mathematical framework. No relationships required. No weeks of waiting.
Lenders structuring debt covenants around connection timelines need a quantified risk view. We provide an auditable, evidence-based assessment that survives technical due diligence.
Connecting distributed generation, publishing hosting capacity, procuring flexibility, justifying decisions under RIIO-ED2: these all depend on model accuracy in ways that traditional network planning did not.
When the network was passive, conservative margins comfortably covered for model imperfections. The past decade has seen real progress in building good models on clean data. Knowing exactly where the data is still shaky, and what improved visibility of the low voltage grid would be worth, connects data improvements to quantifiable error and cost margins.
We are building a calibration layer that sits alongside your existing systems, works with data you already have, and produces outputs your engineers can assess and act on.
A probabilistic map of where your CIM model diverges from measured reality: which parameters are well-identified and which are not, and what type of error is most likely responsible.
Hosting capacity figures with calibrated uncertainty bounds, not just worst-case numbers. More reliable capacity maps mean fewer rejected applications that should have been approved.
Quantify model uncertainty so you can size flexibility decisions against evidence rather than rules of thumb. Safe capacity bounds that reflect what your measurements actually imply.
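One way this can work, sketched with a deliberately crude surrogate feeder (all numbers hypothetical, not our engine): propagate posterior samples of the uncertain quantities through the headroom calculation and quote percentile bounds instead of a single worst-case figure.

```python
import numpy as np

# Hypothetical feeder with a deliberately crude surrogate model. Illustration
# only: the point is the statistical step, not the power-flow detail.
rng = np.random.default_rng(4)
n = 20_000

# Posterior samples of the uncertain inputs, e.g. from a calibration like the
# sketch above
R = rng.normal(0.050, 0.008, n).clip(0.02)       # per-phase feeder resistance (ohms)
peak_load = rng.normal(110, 15, n)               # coincident peak demand (kW)

v_phase, dv_allowed, thermal_kw = 230.0, 13.8, 276.0   # 6% drop limit, ~400 A cable

# Voltage-constrained capacity ~ 3 * V_phase * (allowed drop / R), in kW
voltage_kw = 3 * v_phase * (dv_allowed / R) / 1e3
headroom = np.minimum(voltage_kw, thermal_kw) - peak_load

for q in (1, 5, 50):
    print(f"P{q:02d} headroom: {np.percentile(headroom, q):6.1f} kW")
print(f"P(headroom > 50 kW) = {(headroom > 50).mean():.2f}")
```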
Identify which measurement locations would reduce model uncertainty the most, so each new LV monitor is placed where it delivers the greatest value to inference.
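A simplified sketch of the underlying idea (a random linear-Gaussian surrogate, not our placement engine): greedily choose the candidate monitor locations that most shrink the posterior uncertainty over the model parameters, a classic D-optimal design criterion.

```python
import numpy as np

# Hypothetical linear-Gaussian surrogate: each candidate monitor location gives
# one row of a sensitivity matrix H mapping unknown model parameters to what
# that monitor would see. Greedy D-optimal placement, illustration only.
rng = np.random.default_rng(5)

n_params, n_candidates, noise_var = 8, 30, 0.05 ** 2
H = rng.normal(size=(n_candidates, n_params))     # made-up sensitivities
prior_precision = np.eye(n_params)                # unit-variance prior

def log_det_posterior_cov(rows):
    precision = prior_precision.copy()
    for i in rows:
        precision += np.outer(H[i], H[i]) / noise_var
    return -np.linalg.slogdet(precision)[1]       # log det of posterior covariance

chosen, remaining = [], list(range(n_candidates))
for _ in range(5):                                # budget: 5 new LV monitors
    best = min(remaining, key=lambda i: log_det_posterior_cov(chosen + [i]))
    chosen.append(best)
    remaining.remove(best)
    print(f"place monitor at candidate {best:2d}; "
          f"posterior log-det now {log_det_posterior_cov(chosen):.2f}")
```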
We are sharing working prototypes with a small group of network engineers, infrastructure investors, and energy professionals who care about model accuracy.
Sign up to try the tools, or reach out if you want to discuss a data partnership or design partnership.
The core challenge (parameter estimation from indirect observations under uncertainty) is an applied mathematics and computational complexity problem. The industry has been approaching it with power systems tools. We are not. We are bringing the world's best maths to the problem.
Leonie started her career solving the Schrödinger equation so accurately that computations were often too large for classical computers. That fascination with hard computational problems took her from a PhD in quantum chemistry to the editorial desk at Nature, where she handled research across the physical sciences as Senior Editor. She then moved into deep tech product leadership: first as CPO at quantum computing startup Riverlane, then as VP Product at nPlan, where she built probabilistic intelligence products for infrastructure, forecasting outcomes on some of the world's largest construction and energy projects. That experience taught her what it takes to turn rigorous mathematics into tools that real engineers trust and buy. She founded Loom Light because the electricity grid deserves the same.
Deepanshu has spent his career proving what computers fundamentally can and cannot do. After studying mathematics at IIT Bombay, he completed a PhD in computational complexity theory at the University of Toronto, where he established new fundamental limits on algorithms for core graph problems. He is now a postdoctoral research associate in Computer Science at the University of Cambridge, working on algebraic aspects of computation. His research sits at the frontier of theoretical computer science, combinatorics, and their connections to other areas of mathematics. At Loom Light, he's channelling that rigour into a different kind of network problem: bringing provably sound mathematical methods to power grid modelling, where the gap between what models assume and what physics demands has real consequences.