Hospital Groups · Bengaluru · Published 29 Nov 2025

How a South Indian hospital network reduced evening spillover across three surgical hubs

A three-site surgical network used shared KPI definitions and live schedule control to reduce avoidable evening spillover without imposing one identical local workflow on every campus.

Overtime lists / week: 9 (from 26)

KPI variance across sites: 8% (from 37%)

Dashboard adoption: 92% (from 0%)

Leadership quote

For the first time, site heads could disagree honestly while still looking at the same operating truth.

A. Menon · Group Chief Operating Officer

The hospital group in this case managed three high-volume surgical hubs, each with its own local culture, specialty mix, and way of describing performance. Corporate leadership wanted a network view, but site teams were understandably wary of standardization that ignored local reality. Evening spillover became the issue that forced a change. Some sites were regularly running late yet defending those evenings as complexity. Others insisted they were more disciplined but used definitions that could not be compared cleanly across the network.

Context

Network leadership did not need a centralized command-and-control fantasy. It needed comparability. If one site called a session late only after 7:30 PM and another flagged overtime at 6:15 PM, group reporting became political rather than useful. The same was true for start discipline, idle time, and cancellation logic. The metrics existed, but they did not mean the same thing from campus to campus.

This inconsistency made intervention difficult. When one site head argued that overtime reflected premium case mix and another blamed weak daily control, the group could not easily determine whether either claim was correct. Without shared definitions and shared visibility, network governance remained stuck in narrative mode.

Operational baseline

Each campus had strong local operators, but the control methods were fragmented. One site depended heavily on the judgment of a senior OT in-charge. Another site used spreadsheets layered on top of its HMIS export. A third site had reasonable analytics but weak live reshuffling discipline. All three had pockets of excellence, and all three lost time through uneven mornings, poorly modeled duration drift, or slow recovery once the day started moving.

The group team was especially frustrated by evening spillover because it created staffing strain, surgeon dissatisfaction, and commercial opacity across the network. Yet no one could quantify how much of the late running was clinically inevitable and how much reflected weak sequence quality. That uncertainty kept the issue alive far longer than it should have.

Diagnosis

The first step was not optimization. It was definition. The network aligned on what constituted overtime exposure, first-case success, idle time, block quality, and recoverable versus unavoidable delay. Only after those definitions were shared could the group compare sites in a way that felt fair. That process alone reduced defensiveness because local teams stopped feeling that they were being judged by invisible or shifting standards.
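For illustration, shared definitions of this kind can live in one small configuration that every site's reporting reads from, instead of each campus hard-coding its own cutoffs. The thresholds and field names below are hypothetical, not the network's actual values.

```python
from dataclasses import dataclass
from datetime import time, timedelta

@dataclass(frozen=True)
class KpiDefinitions:
    """One shared KPI vocabulary for every site (illustrative thresholds)."""
    overtime_cutoff: time          # a list counts as overtime past this wall-clock time
    first_case_grace: timedelta    # first case is "on time" within this window
    idle_gap_min: timedelta        # gaps shorter than this are not counted as idle time

# Hypothetical network-wide values; the point is that there is exactly one set.
SHARED = KpiDefinitions(
    overtime_cutoff=time(18, 0),
    first_case_grace=timedelta(minutes=15),
    idle_gap_min=timedelta(minutes=20),
)

def is_overtime(list_end: time, defs: KpiDefinitions = SHARED) -> bool:
    """Same overtime rule at every site, replacing per-site 6:15 PM vs 7:30 PM cutoffs."""
    return list_end > defs.overtime_cutoff
```

With one definition object, a 7:10 PM finish is overtime at every campus or at none, which is what makes cross-site comparison fair.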

Once the measures were normalized, the pattern became clearer. Evening spillover was concentrated in predictable combinations of weak first-case discipline, inconsistent mid-day reshuffling, and service lines where duration assumptions had not kept pace with real operating behavior. Some late sessions were indeed complexity-driven. Many were not. The network finally had the evidence to separate the two.

Rollout approach

The group chose a federated model. Each site would keep its local workflow nuances, but all sites would use the same KPI language, the same schedule-control principles, and the same review cadence. Site teams first ran shadow comparisons against the common control layer so they could evaluate recommendations without surrendering autonomy. That mattered politically and operationally.

Once trust improved, the network adopted a shared governance pack. Local OT teams used it for weekly review. Site heads used it for monthly performance management. Group leadership used it to compare where late running reflected complexity and where it reflected addressable control weakness. The intent was not to create a leaderboard. It was to create clarity.

Network changes that reduced spillover

  • All sites adopted one definition of overtime exposure, first-case reliability, and block quality
  • Shadow runs allowed local teams to compare existing choices with the new control logic before changing live practice
  • Mid-day reshuffling decisions were evaluated against downstream consequences instead of being left to isolated room judgment
  • Service lines with the highest variance received targeted duration-model improvement first
  • Group reviews focused on repeatable patterns rather than site-level anecdotes or blame
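The duration-model targeting in the fourth bullet can be sketched as a simple drift check: compare scheduled against actual minutes per service line and flag the lines that consistently overrun. All service names and case minutes below are invented for illustration.

```python
from statistics import fmean

# Hypothetical (scheduled, actual) minutes per service line; not real data.
cases = {
    "ortho":   [(90, 118), (120, 150), (60, 82)],
    "general": [(60, 63), (90, 88), (45, 47)],
}

def drift_ratio(pairs):
    """Mean actual/scheduled ratio; values above 1 mean cases run longer than booked."""
    return fmean(actual / scheduled for scheduled, actual in pairs)

# Flag service lines whose bookings have drifted more than 15% from reality,
# so their duration models are corrected first (threshold is illustrative).
flagged = sorted(s for s, pairs in cases.items() if drift_ratio(pairs) > 1.15)
```

A list like `flagged` is one concrete way to decide which service lines get duration-model attention first, rather than debating anecdotes about which specialty "always runs late".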

What changed operationally

The immediate benefit was that meetings became more useful. Site leaders could still challenge assumptions, but they were no longer arguing over which data was real. OT teams also saw practical value because the live control view reduced the need to improvise in isolation when a morning delay threatened the afternoon. Recovery decisions became faster and more consistent.

Equally important, the network learned where standardization should stop. Shared KPI language did not require identical local staffing patterns or identical escalation etiquette. Sites retained their own operating character while still contributing to a network view that leadership could trust. That balance is what allowed adoption to spread instead of being resisted as a head-office mandate.

Measurable outcomes

Across the three hubs, avoidable overtime lists fell from twenty-six per week to nine. KPI variance across sites narrowed from thirty-seven percent to eight percent because teams were finally measuring the same behaviors the same way. Dashboard adoption reached ninety-two percent across the intended leadership and operating audience, a useful sign that the reporting layer was becoming relevant rather than decorative.
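As a worked illustration of what a cross-site variance figure can mean: one common choice is the coefficient of variation, the spread of a KPI across sites relative to its mean. The site values below are invented; the source does not state which formula the network used.

```python
import statistics

def kpi_variance_pct(site_values: list[float]) -> float:
    """Cross-site spread as coefficient of variation, in percent:
    population standard deviation divided by the mean."""
    mean = statistics.fmean(site_values)
    return 100 * statistics.pstdev(site_values) / mean

# Illustrative first-case on-time rates for three sites, not real data.
before = [0.48, 0.71, 0.90]   # each site measuring the KPI its own way
after = [0.78, 0.82, 0.85]    # shared definitions, comparable behavior
```

Under any such measure, `before` shows a much wider spread than `after`; a shrinking figure means the sites are both measuring alike and behaving alike.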

The network also gained a better basis for investment. Leadership could see which sites needed stronger first-case governance, which services needed better duration modeling, and which rooms were mostly healthy but suffering from a few repeated bottlenecks. That made resource allocation more credible than broad corporate messaging about efficiency ever had.

Leadership takeaway

The group's biggest lesson was that network governance is impossible without definitional trust. Once that trust existed, meaningful optimization became possible. Before that, even good ideas stalled because every site feared being misunderstood or unfairly compared.

The project also showed that group-level control does not require flattening local expertise. The strongest network model let sites keep judgment where judgment mattered while enforcing common language where comparison mattered. That distinction was what turned the rollout from a reporting exercise into a usable management system.

What happened next

With overtime now easier to interpret, the group moved on to deeper questions about service-line design, capacity planning, and where shared best practices could be replicated most effectively. The initial governance work made those later decisions far more grounded than they would have been in a narrative-only environment.

Hospital profile

Hospital: Illustrative hospital network control program, Bengaluru

OR footprint: 18 ORs

HMIS: Mixed HMIS stack

Segment: Hospital Groups

Want your own outcome story?

We’ll review your current OR performance, quantify the recoverable opportunity, and show which ORS AI modules should lead the rollout.

Book a free OR audit