
Digital Twin for Nonwoven Production

Product configuration trials reduced from 8–12 physical runs to 2–3; development cycle time cut by 60%.

Diagram: sensor data (line speed, needle penetration, temperature profiles, calender pressure, weight and tensile tests) feeds the digital twin, which comprises a physics-informed ML model calibrated on historical production runs, a parameter simulator mapping target specs to predicted outputs, and out-of-spec risk scoring with confidence ranges and adjustments. Output: a simulation interface delivering a predicted output profile, a compliance score, parameter adjustments, and fewer physical trials.

The challenge

New product development was gated by physical machine availability — the only way to test a configuration was to make it, which meant R&D competed directly with production for capacity.

The specific problem was twofold. First, the needlepunch process involves a large number of interdependent variables — needle density, penetration depth, fibre blend ratio, line speed, oven temperature profile, calender pressure — and the interactions between them are non-linear and poorly characterised by general models. Institutional knowledge lived in the heads of senior production engineers. When a new product was needed, the R&D process was essentially structured guesswork, refined by physical iteration. Second, the growing demand for glass fibre-reinforced hybrid constructions (which behave differently under thermal and mechanical stress than standard PET felt) meant that the empirical knowledge base was thin precisely where the highest-value product development was happening. Physical trials for hybrid configurations were consuming 15–20% of primary line capacity in development phases — visible as foregone production volume on every quarterly plan.

The system

Decision system built

A digital twin of the primary needlepunch production line was built by integrating three years of structured sensor data — line speed, needle penetration, temperature profiles, calender pressure, weight measurements at intervals — with a physics-informed ML model calibrated against the company's production history. The twin models the output characteristics (weight uniformity, tensile strength, dimensional stability, surface defect probability) of a given parameter configuration before any physical trial is run. A simulation interface allows R&D engineers to specify a target product configuration and receive a predicted output profile, flagging configurations likely to produce out-of-spec product and recommending parameter adjustments. Physical trials are reserved for the 2–3 configurations the model predicts will be closest to spec.
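To make the workflow concrete, here is an illustrative sketch only: the field names and the linear surrogate below are hypothetical placeholders standing in for the calibrated physics-informed model, not the production system.

```python
from dataclasses import dataclass

@dataclass
class LineConfig:
    """Candidate needlepunch parameter configuration (hypothetical fields)."""
    line_speed_m_min: float
    needle_penetration_mm: float
    oven_temp_c: float
    calender_pressure_bar: float

def predict_tensile(cfg: LineConfig) -> float:
    """Toy linear surrogate standing in for the physics-informed model.
    Coefficients are illustrative, not calibrated values."""
    return (
        120.0
        - 0.8 * cfg.line_speed_m_min
        + 2.5 * cfg.needle_penetration_mm
        + 0.1 * cfg.oven_temp_c
        + 0.5 * cfg.calender_pressure_bar
    )

def screen_configs(configs, spec_min, top_n=3):
    """Rank candidates by predicted tensile strength and keep only the
    in-spec configurations closest to target, mirroring the workflow of
    reserving physical trials for the best 2-3 predictions."""
    scored = [(predict_tensile(c), c) for c in configs]
    in_spec = [(p, c) for p, c in scored if p >= spec_min]
    in_spec.sort(key=lambda t: t[0], reverse=True)
    return in_spec[:top_n]
```

The point of the sketch is the screening step: many candidate configurations are evaluated virtually, and only the shortlist that clears the predicted spec threshold is sent for physical validation.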

System components

01

Sensor data integration layer — three years of historical production data (line speed, needle parameters, temperature profiles, weight, tensile results) structured and ingested

02

Physics-informed ML model calibrated against production history for the specific needlepunch machinery configuration

03

Parameter configuration simulator — R&D interface for specifying product targets and receiving predicted output profiles

04

Out-of-spec probability scoring with recommended parameter adjustment outputs

05

Real-time synchronisation with active production sensor feeds — twin updates as production conditions change

06

Glass fibre-reinforced hybrid construction model layer (separately calibrated using reinforced product trial data)
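Component 04, the out-of-spec scoring, can be sketched under one assumption that is ours, not the source's: that the model returns a predicted mean and an uncertainty band for each output property, and that prediction error is roughly Gaussian. On that assumption, the risk score is a normal-tail probability against the spec limits:

```python
import math

def out_of_spec_probability(pred_mean: float, pred_std: float,
                            spec_low: float, spec_high: float) -> float:
    """Probability that the realised property falls outside [spec_low, spec_high],
    assuming the model's prediction error is roughly Gaussian (an illustrative
    assumption, not a statement about the production model)."""
    def cdf(x: float) -> float:
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    z_low = (spec_low - pred_mean) / pred_std
    z_high = (spec_high - pred_mean) / pred_std
    p_in_spec = cdf(z_high) - cdf(z_low)
    return 1.0 - p_in_spec
```

A prediction centred well inside wide spec limits scores near zero risk; a prediction whose mean sits at or beyond a limit scores high risk, which is the signal used to flag configurations and recommend parameter adjustments.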

How we worked

01

Engagement scope

Historical sensor data audit and structuring (3-year archive), physics-informed model architecture and calibration to specific machinery configuration, R&D simulation interface build, real-time sensor integration, glass fibre hybrid model calibration using available trial data, knowledge transfer and training for R&D and production engineering teams.

02

Timeline

Phased 6-month build: data structuring and model calibration in months 1–3, simulation interface and validation against known historical outcomes in months 4–5, live deployment and R&D team onboarding in month 6. Ongoing model refinement as new trial data is generated.

03

Operating model

R&D engineers operate the simulation interface independently. Model recalibration triggered quarterly using new physical trial data — the twin improves as the product portfolio expands. Production engineering team retains final sign-off on all configurations before physical validation trials are commissioned.

Outcomes

Business impact & measurable results


01

Physical trial runs per new product configuration reduced from an average of 8–12 runs to 2–3 validation runs — a 75% reduction in physical development iterations

02

New grammage variant development cycle time cut by 60%, from an average of 14 weeks to 5–6 weeks from concept to production-ready specification

03

Primary line capacity recovered from R&D trial use — equivalent to approximately 8–10 additional production days per quarter

04

Glass fibre-reinforced hybrid product development accelerated significantly: first viable Orbond Plus-equivalent configuration achieved in 3 physical validation runs vs. 11 runs for the previous standard product development cycle

05

Institutional process knowledge formalised into a queryable model — reducing dependency on individual senior engineers and creating a transferable knowledge base for operator development

Governance

Trust, collaboration & governance

01

Model calibrated against the company's own production history — not a generic needlepunch model adapted from elsewhere

02

Prediction confidence scores surfaced to users — the system indicates where its predictions are reliable and where physical validation is non-negotiable

03

Physical validation trials retained as mandatory step before any new configuration enters production — the twin reduces trials, does not replace them

04

Full knowledge transfer: R&D team trained to run simulations, interpret outputs, and identify configurations where the model's training data is thin

Reframe

Bespoke grammage requests went from six weeks and ten trial runs to two weeks and a validated spec.

Across every engagement, the goal is the same: engineer a system that makes better decisions — faster, more consistently, and at scale — than the process it replaces.

Start a discovery

Most engagements begin with a conversation about context.

We do not send a proposal before we understand the problem. Start by telling us about your decision context — we will identify the highest-leverage intervention areas before any scope is agreed.