Tenbric

Catching waste. Saving cost.

Less compute.
More signal.

A monitoring approach calibrated from healthy operation. The same procedure works across compute, rotating machinery, electrochemistry, chemicals and thermal plant.

What it does

Catches drift, fouling, and degradation before threshold alarms fire. Tracks the mechanism behind it through to consequence.

How it works

Calibrate against fifty samples of healthy operation. The same procedure for any machine — turbofan, boiler, battery, GPU, plant. No fault labels, no retraining, no GPU required.
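
How that looks in practice, as a minimal sketch (an illustration of the calibrate-then-score idea, not Tenbric's production code; the class name, the robust z-score choice, and the thresholds are our assumptions):

```python
import numpy as np

class HealthyBaseline:
    """Illustrative monitor calibrated from healthy operation only.

    Hypothetical sketch: per-signal median/MAD from a healthy window,
    robust z-scores on live samples, alarm on sustained deviation.
    No fault labels anywhere in the loop.
    """

    def __init__(self, z_alert=4.0, patience=5):
        self.z_alert = z_alert    # robust sigmas counted as abnormal
        self.patience = patience  # consecutive abnormal samples before alarm
        self._streak = 0

    def calibrate(self, healthy):
        # healthy: (n_samples, n_signals), e.g. the first fifty cycles.
        healthy = np.asarray(healthy, dtype=float)
        self.median = np.median(healthy, axis=0)
        mad = np.median(np.abs(healthy - self.median), axis=0)
        self.scale = 1.4826 * mad + 1e-12  # MAD rescaled toward sigma
        return self

    def update(self, sample):
        # Worst per-signal robust z-score for this sample.
        z = np.abs((np.asarray(sample, dtype=float) - self.median) / self.scale)
        if z.max() > self.z_alert:
            self._streak += 1  # demand sustained deviation, not one outlier
        else:
            self._streak = 0
        return self._streak >= self.patience
```

The commissioning loop is the same whatever the unit: collect fifty healthy samples, call calibrate(), then stream live telemetry through update().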

What you get

Less waste. Lower running cost. One monitoring stack covering compute, thermal, electrochemical and mechanical systems — instead of four separate tools.

Where the architecture has been tested

Six domains
01

Turbofan engines

A hundred run-to-failure engines from NASA's C-MAPSS dataset. Healthy baseline calibrated from the first fifty cycles of each engine.

100 of 100
Engines detected
54 cycles
Median lead time before failure
28 cycles
From first warning to escalation
Public benchmark — NASA C-MAPSS, Saxena et al., PHM '08
02

Wind turbines

Five testable fault events in real SCADA data from a Portuguese onshore wind farm. Detection through load-invariant physics signals.

3 of 5
Testable events caught
+66h
Lead on hydraulic fault
+35h
Lead on generator bearing
Public benchmark — Fraunhofer IEE, EDP open data
03

Hot-water boiler

A simulated commercial boiler plant across a full Chicago year. Three signals — gas-flow ratio, ΔT across the boiler, setpoint error.

3 of 3
Fouling severities ranked correctly
Hour 48
Detection of 80% fouling
8,760 h
Healthy-data routine run year-round
Public benchmark — Lawrence Berkeley National Laboratory
04

Lithium-ion battery

Four LCO cells from the CALCE dataset. Capacity, internal resistance, and charge-time deviation tracked together; an incremental capacity analysis (ICA) mechanism layer on every charge curve.
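
ICA differentiates the charge curve to get dQ/dV, whose peaks shift and shrink as specific degradation mechanisms progress. A minimal sketch of that computation, under assumptions (a monotonic constant-current charge segment; the function name, grid size, and smoothing are illustrative):

```python
import numpy as np

def incremental_capacity(voltage, capacity, n_grid=400, smooth=7):
    """Illustrative ICA: dQ/dV from one charge curve.

    voltage  -- cell voltage samples (V), assumed monotonically increasing
    capacity -- cumulative charged capacity (Ah) at those samples
    Returns (v_grid, dq_dv); the peak positions and heights are the
    mechanism features tracked cycle over cycle.
    """
    v = np.asarray(voltage, dtype=float)
    q = np.asarray(capacity, dtype=float)
    v_grid = np.linspace(v.min(), v.max(), n_grid)
    # Interpolate Q(V) onto a uniform voltage grid, then differentiate.
    q_interp = np.interp(v_grid, v, q)
    dq_dv = np.gradient(q_interp, v_grid)
    # Light moving-average smoothing to steady the numerical derivative.
    kernel = np.ones(smooth) / smooth
    return v_grid, np.convolve(dq_dv, kernel, mode="same")
```

Watching those peaks drift cycle over cycle is how a mechanism signal can lead the capacity consequence.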

4 of 4
Cells detected before fade
214–259 cycles
Actionable window
30 cycles
Mechanism leads consequence
Public benchmark — University of Maryland CALCE group
05

Chemical process plant

Twenty-one fault scenarios in the Tennessee Eastman benchmark. Watching controller effort rather than process variables — the indirect signal.
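
The intuition: regulatory loops hide faults from the process variables by working harder, so accumulated effort in the manipulated variables surfaces first. A minimal version of that idea, sketched under assumptions (a plain CUSUM over standardized controller outputs; the function name, slack, and threshold are illustrative, not the production detector):

```python
import numpy as np

def controller_effort_cusum(mv, mv_healthy, k=0.5, h=8.0):
    """Illustrative detector on controller effort via CUSUM.

    mv         -- (n_samples, n_loops) manipulated variables over time
    mv_healthy -- healthy-operation window of the same loops
    k, h       -- slack and decision threshold in sigma units
    Returns the index of first detection, or None.
    """
    mv = np.asarray(mv, dtype=float)
    healthy = np.asarray(mv_healthy, dtype=float)
    mu = healthy.mean(axis=0)
    sigma = healthy.std(axis=0) + 1e-12
    z = np.abs((mv - mu) / sigma)  # standardized controller effort
    s = np.zeros(mv.shape[1])
    for t, row in enumerate(z):
        # Accumulate effort beyond the slack k; decay when it subsides.
        s = np.maximum(0.0, s + row - k)
        if s.max() > h:  # some loop working too hard for too long
            return t
    return None
```

On Tennessee Eastman, mv would be the benchmark's twelve manipulated variables (XMV); whichever loop's cumulative effort crosses h first raises the detection.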

21 of 21
Scenarios detected
78 min
Mean detection time
6 families
Fault types discriminated
Public benchmark — Downs & Vogel 1993, Manca 2020
06

GPU compute

Twenty hours of real V100 telemetry from Oxford's Reveal dataset, covering thirteen ML workloads including BERT, ViT, LLaMA, and Mistral.

+319%
Better than the best neural autoencoder
33× / 7.4×
Faster to train, fewer parameters
No GPU
Sub-millisecond inference, FPGA-deployable
Public dataset — University of Oxford, CC-BY 4.0

One commissioning procedure. Whatever the machine.

Detection before threshold alarms fire. A mechanism layer behind the detection. Lead time to act. The same architecture covers compute, thermal, electrochemical and mechanical systems — every part of a complex plant, under one stack.

What this doesn't yet prove

Each result above is on public benchmark data, not customer pilots. Known limits: signal-coverage gaps where the relevant physics isn't sensed; sensitivity to operating regime where the baseline window doesn't span seasonal variation; and per-unit calibration as a hard requirement, since there is no one-size-fits-all model.