
    Posts by Dan Eaves, PE, CSE

    HMI/SCADA Screen Design: Layout Standards that Boost Operator Response

    In oil and gas, an operator can go from “all good” to “shut it down now” in seconds. At that moment, the HMI or SCADA screen isn’t decoration, it’s the operator’s lifeline.

    A clean, structured display helps them spot an abnormal condition, understand it, and act before it becomes downtime, equipment damage, or a safety event.

    A cluttered display does the opposite: it slows recognition, buries the real problem, and adds stress right when clarity matters most.

    Measuring Optimal Screen Layouts

    This is measurable. High-performance HMI programs have been shown to help operators detect abnormal situations more than five times faster and intervene in time to prevent escalation far more consistently.

    Those gains don’t come from prettier graphics. They come from disciplined layout, predictable navigation, and clear visual hierarchy rooted in how people actually see and think.

    Standards such as ISA-101, NUREG-0700, and ISO 11064 exist for exactly that reason. 

    ISA-101, for example, focuses on consistency, navigation, alarm clarity, and situational awareness so operators can read plant status at a glance and respond correctly under pressure.

    These aren’t academic guidelines. They’re field-tested practices that have improved reliability and reduced incident rates across process industries.

    This guide breaks down the core layout principles you need to apply if you want your HMI/SCADA screens to function as decision tools, not just data wallpaper.

    How Operators Actually Process Screen Information

    Humans don’t scan a page like a spreadsheet. Vision works in two stages.

    Stage 1: Preattentive Processing

    First is “preattentive” processing — the brain sweeps the screen and flags something that looks wrong (a color shift, an odd shape, a bar out of range) in a few hundred milliseconds. Research in visual cognition puts that early sweep in roughly the 200–500 millisecond window, before we’re even fully aware we’re looking. After that comes focused attention, which is slower and takes mental effort.

    Good HMI design uses that biology. True problems should stand out instantly, even in peripheral vision. Routine values should fade into the background until needed. When every value is loud, nothing is urgent. That forces operators to spend energy sorting noise from signals instead of deciding what to do.

    Stage 2: Cognitive Load

    Cognitive load is the second limiter. People can only process so much at once, and once you blow past that limit, performance doesn’t just dip slightly. Dump fifty unrelated tags, flashing colors, and stacked trends into one view and it collapses.

    Modern “high-performance HMI” layouts respond to that by using muted graphics, limited color, and exception-based displays instead of “everything everywhere.”

    Measuring Whether a Screen Works

    If you want better performance, track it. The most useful commissioning and operations KPIs for HMI/SCADA screens are:

    • Recognition time — How fast does the operator notice that something is wrong? For top-tier consoles, the goal for a critical alarm is under about three seconds.
    • Diagnosis time — How fast can they explain what’s happening and why? Well-structured overview screens with proper context routinely get operators to a working mental model in under 30 seconds for common upsets.
    • Action initiation time — How long before corrective action starts?
    • First-time success — Did they choose the correct response without guessing or bouncing through five menus?

    Plants that shorten recognition/diagnosis time and increase first-time success see fewer unplanned slowdowns and fewer “near miss” safety events. Those improvements show up as shorter abnormal events and less time running equipment outside ideal limits — which is real money.

    Operator fatigue is another metric worth watching. Screens that make people hunt, squint, and cross-reference constantly drive fatigue, and fatigue drives mistakes. An HMI that reduces searching and makes state obvious at a glance is a reliability tool, not a cosmetic upgrade.
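
    As a rough illustration, these KPIs can be pulled straight from timestamped alarm and operator-action logs. The sketch below uses made-up timestamps and event names, not any particular historian’s export format:

    ```python
    from datetime import datetime

    # Hypothetical timestamped events for one abnormal situation,
    # e.g. exported from an alarm/event journal.
    events = {
        "alarm_raised":     datetime(2024, 5, 1, 14, 2, 10),
        "operator_noticed": datetime(2024, 5, 1, 14, 2, 12),  # first screen interaction
        "diagnosis_logged": datetime(2024, 5, 1, 14, 2, 35),  # operator states the cause
        "action_initiated": datetime(2024, 5, 1, 14, 2, 50),  # first corrective command
    }

    def seconds_between(start_key, end_key):
        """Elapsed seconds between two logged events."""
        return (events[end_key] - events[start_key]).total_seconds()

    recognition = seconds_between("alarm_raised", "operator_noticed")
    diagnosis   = seconds_between("alarm_raised", "diagnosis_logged")
    action      = seconds_between("alarm_raised", "action_initiated")

    print(f"Recognition:  {recognition:.0f} s (target under ~3 s for critical alarms)")
    print(f"Diagnosis:    {diagnosis:.0f} s (target under ~30 s for common upsets)")
    print(f"Action start: {action:.0f} s")
    ```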

    High-Performance HMI and ISA-101: The Mindset

    Classic SCADA graphics tried to recreate a P&ID on the screen: every pump, every line, every tag. High-performance HMI takes the opposite view. Operators don’t need every detail all the time. They need to know: Are we healthy? If not, where’s the problem? What should I do next?

    ISA-101 formalizes that approach. The standard calls for HMIs that support situational awareness — the operator can tell what state the system is in, predict where it’s headed, and access the right control fast enough to intervene. Three themes show up again and again:

    1. Clarity. Graphics are intentionally simple, often grayscale, so real problems pop instead of drowning in gradients and 3-D art.
    2. Consistency. Navigation, alarm areas, and key status indicators live in the same place on every screen and across process units.
    3. Progressive disclosure. High-level overviews show overall health. From there, the operator drills down (unit → train → equipment) to get detail only when it’s actually needed.

    This mindset is about cognitive efficiency, not “nicer look and feel.” You are preserving the operator’s attention for judgment instead of forcing them to search for controls.

    Practical Layout Rules You Should Be Applying

    Screen Zoning

    ISA-101 encourages standard screen zoning. Put the most critical live status (pressures, levels, flows, permissives, trips) front and center, usually top-middle, because eye-tracking work shows that’s where operators naturally look first. 

    Secondary data like controller states, supporting KPIs, and short trends sit nearby but do not compete for attention. Navigation and common actions occupy fixed bands along an edge (often top or left) so nobody has to hunt for them in a crisis.

    Once that frame is set, copy it everywhere. The benefit is predictability. During an upset, the operator’s eyes and hands already “know” the layout.

    Information Hierarchy

    Not all data is equally urgent. Treat it that way.

    • Primary information is anything tied to safety, regulatory compliance, or staying online. It earns prime placement, bigger text, and the strongest visual cues.
    • Secondary information is used to optimize production, energy use, or product quality. Keep it visible but calmer.
    • Tertiary information is diagnostic context. It should be one click away, not stuffed onto the main overview.

    This hierarchy stops the “wall of numbers” effect and lets the operator build a reliable mental checklist: look here first for safety, here second for throughput, here third for troubleshooting.

    Alarm and Event Visibility

    Alarms should live in a dedicated, always-visible band — often along the bottom or side — instead of floating over graphics. Critical trip conditions and standing alarms can’t be allowed to hide behind popups. The alarm band is not decoration; it’s a reserved lane for “act now” information.

    Equally important: alarm navigation and acknowledgment controls belong in the same place on every screen. Muscle memory matters when the room is loud, people are talking, and time is tight.

    White Space Isn’t Wasted Space

    One of the fastest ways to sabotage an interface is to pack every inch with numbers and mini-trends. Overdense screens force operators to mentally sort and group information before they can even think about action.

    Give the eye breathing room. Use spacing and subtle grouping (borders, light shaded panels, proximity) to show which values belong together. White space is how an operator instantly reads “these four values are one subsystem” without having to stop and decode.


    Visual Design That Helps Instead of Hurts

    Color: Treat It Like a Siren, Not Wallpaper

    High-performance HMI guidelines intentionally limit color. Red is “danger/emergency,” yellow is “abnormal/caution,” green is “normal/safe,” blue is “advisory.” Everything else stays neutral for two reasons:

    • If the whole screen is bright, nothing looks urgent.
    • Color alone is unreliable. 

    Roughly 8% of men (and a much smaller share of women) have red-green color vision deficiency, which means a pure “red vs. green” signal can literally vanish for them. Good screens back up color with shape, label, location, or icon.

    Also consider lighting and age. Control rooms are rarely perfect viewing environments. High contrast between text and background keeps values readable on older monitors, through glare, or under emergency lighting. Aim for a contrast ratio of 7:1 or better.
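
    If you want to sanity-check that target, the widely published WCAG relative-luminance formula gives a quick estimate. The RGB values below are arbitrary examples, not a recommended palette:

    ```python
    def relative_luminance(rgb):
        """WCAG relative luminance for an sRGB color given as (R, G, B) in 0-255."""
        def linearize(channel):
            c = channel / 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg):
        """Contrast ratio between two colors; 7:1 or better is a good HMI target."""
        lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # Example: near-black text on a light-gray high-performance HMI background.
    print(f"{contrast_ratio((40, 40, 40), (230, 230, 230)):.1f} : 1")
    ```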

    Typography and Icons

    Readable beats stylish. Use clean sans-serif fonts with clear shapes. Make sure normal operating values can be read from a typical console distance without leaning in, and make critical states larger and bolder.

    Keep the typography rules consistent across the system so “critical,” “normal,” and “caution” always look the same wherever they appear.

    For symbols, stick to standard ISA / ISO process icons so a pump looks like a pump and a valve looks like a valve no matter who’s on shift.

    Build an icon library, document it, and require everyone to use it. Consistency here pays off in faster recognition and fewer “what am I looking at?” questions.

    Navigation and Information Architecture

    Think like an operator, not like a PLC rack. Menus should mirror how the plant is actually run: “Train A Overview,” “Compression,” “Produced Water Handling,” “Flare,” etc., not just controller names. That keeps navigation aligned with real tasks.

    Within that structure, breadcrumb trails or on-screen location indicators tell the operator, “You’re in Injection Pumps → Pump B Detail.” That orientation matters when they’re drilling down fast to troubleshoot a trip.

    High-risk actions like shutdowns, bypasses, and permissive overrides need direct, consistent access. Do not bury them three clicks deep in a submenu that looks different in each unit. That’s how hesitation and mis-clicks happen during high-adrenaline moments.

    Make It Stick Across the Entire System

    A style guide is non-negotiable if you want long-term consistency. It should lock in:

    • Screen zoning (what sits where)
    • Color and alarm priority rules
    • Font families and sizes
    • Icon library and naming
    • Navigation layout and breadcrumb rules
    • Alarm banner behavior and acknowledgment path

    Treat that guide like any other engineering standard, not an optional branding exercise.

    Test With Real Operators Before You Roll Out

    Before you call a design “done,” hand it to actual console operators and run realistic scenarios. Include both steady-state work (normal operation, minor adjustment) and stress situations (pressure spike, trip, environmental exceedance). Time how long it takes them to notice the problem, explain what’s happening, and start the right response.

    Capture the numbers: recognition time, navigation errors, first-time success rate. Compare new screens to legacy ones. The plants that close this loop — design, test, adjust — see smoother startups, faster abnormal response, and fewer commissioning surprises.

    Your Next Moves

    Screen layout is not cosmetic. It is an operational control layer. The difference between a disciplined, ISA-101 style layout and a legacy “copy of the P&ID” graphic shows up in abnormal event duration, alarm floods, and stress on the person keeping the asset online.

    The path forward is straightforward:

    1. Audit what you currently display. Where are the alarms? Where does the eye land first? Where are people wasting time hunting?
    2. Write or update your HMI style guide. Lock in zoning, colors, typography, icons, navigation, and alarm behavior.
    3. Pilot the new layout on one unit or console. Train the operators who will live with it. Measure before/after performance.
    4. Roll out in phases, keep gathering feedback, and treat operator input like instrumentation data — objective and actionable.

    Do this and your HMI stops being a noisy wall of data. It becomes what it should be: a fast, reliable decision aid that helps the operator keep the plant safe, compliant, and productive — even on the worst day of the year.

    Dan Eaves

    Dan Eaves, PE, CSE

    Dan has been a registered Professional Engineer (PE) since 2016 and holds a Certified SCADA Engineer (CSE) credential. He joined PLC Construction & Engineering (PLC) in 2015 and has led the development and management of PLC’s Engineering Services Division. With over 15 years of hands-on experience in automation and control systems — including a decade focused on upstream and mid-stream oil & gas operations — Dan brings deep technical expertise and a results-driven mindset to every project.

    PLC Construction & Engineering (PLC) is a nationally recognized EPC company and contractor providing comprehensive, end-to-end project solutions. The company’s core services include Project Engineering & Design, SCADA, Automation & Control, Commissioning, Relief Systems and Flare Studies, Field Services, Construction, and Fabrication. PLC’s integrated approach allows clients to move seamlessly from concept to completion with in-house experts managing every phase of the process. By combining engineering precision, field expertise, and construction excellence, PLC delivers efficient, high-quality results that meet the complex demands of modern industrial and energy projects.


    The EPC Path for Cryogenic Gas Plants: From Concept to Startup

    Cryogenic gas processing sits in a category of its own. These plants operate at temperatures well below -150°F (-101°C), and many core steps push toward the boiling point of liquid nitrogen at roughly -320°F (-196°C). 

    There are various hazards in those conditions: exposed skin can freeze almost instantly, common alloys turn brittle, and equipment has to survive violent thermal swings.

    Building facilities that chill natural gas into liquefied natural gas (LNG), generate liquid nitrogen, or recover natural gas liquids (NGLs) isn’t just technically hard. It’s also capital-intensive, tightly regulated, and unforgiving: a single mistake can cost millions.

    Why Start a Cryogenic Plant?

    Why do it? Because cryogenic plants do things nothing else can. LNG facilities cool pipeline gas to about -260°F (-162°C), shrinking it to roughly 1/600 of its normal volume so it can be shipped worldwide and later regasified for pipeline use.

    Air separation units (ASUs) distill atmospheric air into oxygen, nitrogen, and argon. That oxygen may supply hospitals, while the nitrogen feeds semiconductor fabs — which expect purity in the 99.99% to 99.999% range.

    NGL recovery units pull ethane, propane, butane, and heavier hydrocarbons out of raw gas streams and turn them into petrochemical feedstocks, heating fuels, and blending components.

    In other words, these plants keep energy logistics, medical supply chains, and advanced manufacturing moving.

    High Stakes, High Commitment

    Because the stakes are that high, most owners don’t split design, procurement, and construction across multiple firms and hope it all fits together. 

    They use an EPC delivery model: Engineering, Procurement, and Construction, the full scope of services PLC Construction delivers. One party owns the entire path from concept through startup and can be held accountable for performance.

    That EPC path covers five phases:

    1. Concept development
    2. Front-End Engineering Design (FEED)
    3. Detailed engineering and procurement
    4. Construction and installation
    5. Commissioning and startup

    Getting each phase right is the difference between a plant that starts cleanly and one that needs months of rework.

    Why EPC Matters More for Cryogenic Plants

    In conventional gas projects, you can sometimes hand off design to one firm, buy equipment through another, and let a third party build it.

    In cryogenic service, that kind of hand-off is far riskier. At cryogenic temperatures:

    • Carbon steel that behaves fine at ambient can fracture like glass.
    • Seals that work warm can leak when cycled cold.
    • Piping grows and shrinks as systems warm for maintenance and then plunge back to operating temperature.

    If design, purchasing, fabrication, and installation sit with different groups, small mismatches can surface only at startup. 

    EPC Tightens That Chain

    One team sets requirements, vets vendors, oversees fabrication tolerances, manages site installation, and then has to prove the plant runs. That continuity is why EPC is standard for LNG trains, ASUs, and similar deep-cold systems.

    Phase 1: Concept Development and Feasibility

    This first phase answers the blunt questions before major capital is committed.

    • Is there durable market demand?
    • Can we permit and build where we want?
    • Will the economics survive changes in feedstock or power cost?

    Market analysis is essential to understand the underlying requirements.

    For LNG, can you lock in long-term offtake?

    For industrial gases, do nearby end users — hospitals, fabs, refineries, chemical plants — have enough steady demand to justify capacity?

    A design that works in a Gulf Coast energy hub with dense pipeline access and cheap power may fail in a region with limited infrastructure.

    Site selection becomes a discipline of its own. Cryogenic plants need:

    • Pipeline or product tie-ins
    • Serious electrical capacity
    • Heavy-haul access for oversize equipment
    • Physical stand-off from neighborhoods
    • Local emergency response familiar with cryogens

    That last point is not optional. Cryogenic liquids like nitrogen boil at roughly -320°F (-196°C), expand rapidly as they warm, and can displace breathable oxygen, creating an asphyxiation hazard in enclosed or low-lying areas. 

    First responders and plant operators both need to understand how fast an oxygen-deficient atmosphere can develop.

    Product specification also gets locked here. A 500-ton-per-day oxygen plant is not just a scaled-up 50-ton unit. It’s different compressors, different power demand, different logistics. 

    Purity targets drive cost. Supplying generic nitrogen for pipeline purging is one thing. Supplying ultra-high-purity nitrogen to a semiconductor fab — which treats contamination as a line-stopping event — is something else entirely.

    Permitting starts in this phase, too. Environmental impact assessments, air permits, noise studies, dispersion modeling. Those timelines are measured in months, sometimes quarters.

    Waiting until detailed design to talk to regulators is how you lose a year.


    Phase 2: Front-End Engineering Design (FEED)

    FEED turns a commercial idea into an engineering definition. Process engineers simulate refrigeration cycles, column performance, compression stages, and energy balances. The goal is to lock in a flow scheme that can actually deliver capacity and purity at a realistic power load, not just on paper but in steel.

    Key FEED outputs include:

    • Process flow diagrams (PFDs)
    • Piping and instrumentation diagrams (P&IDs)
    • Preliminary equipment data sheets
    • Plot plans and layout concepts
    • An early project schedule and cost estimate tied to real equipment, not guesses

    For cryogenic plants, FEED also runs structured safety reviews. 

    HAZOP teams spend hours walking “what if?” paths:

    • What if a valve fails during cooldown?
    • What if power dips mid-restart?
    • What if nitrogen leaks into an enclosed work area and drives oxygen below safe breathing levels?

    By the end of FEED, owners usually make the final go/no-go decision. After that, the project stops being “an idea we’re studying” and becomes “an asset we’re going to build.”

    Phase 3: Detailed Engineering and Procurement

    Detailed engineering takes FEED and turns it into construction documents. Every pipe size, cable run, support, foundation, and instrument loop gets defined. At cryogenic temperatures, those details decide whether the plant runs.

    Thermal movement is a big one. A line that’s 100 feet long at ambient can pull in by several inches once it’s cooled toward cryogenic service.

    If you don’t design in flexibility, the line can crack or tear away from its supports. This is where expansion loops, spring supports that allow movement, and materials that stay ductile in the cold come in.
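
    For a rough feel of the numbers, a back-of-the-envelope contraction estimate looks like this. The coefficient is an assumed mean value for austenitic stainless steel; real designs use integrated contraction data for the specific alloy and temperature range:

    ```python
    # Rough estimate of thermal contraction for a long line cooled to cryogenic service.
    alpha_mean = 8.5e-6      # in/in/degF, assumed mean coefficient (ambient to -320 F)
    length_ft = 100.0        # line length at ambient
    t_ambient_f = 70.0
    t_cold_f = -320.0        # near liquid-nitrogen temperature

    delta_t = t_ambient_f - t_cold_f                   # 390 degF swing
    contraction_in = alpha_mean * (length_ft * 12) * delta_t

    print(f"Approximate contraction: {contraction_in:.1f} inches over {length_ft:.0f} ft")
    # Roughly 4 inches -- enough to crack a rigid line or pull it off its supports.
    ```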

    Structures and platforms also have to handle shifting loads as systems fill, empty, warm, and cool.

    Procurement in this phase often dictates the schedule. Cryogenic hardware is not commodity gear. Cold boxes, plate-fin heat exchangers, and large cryogenic compressors come from a short list of qualified manufacturers worldwide.

    Lead times can run 12–24 months, and the quality requirements are unforgiving: welds get 100% radiography, helium leak tests are standard, and low-temperature impact toughness must be proven, not assumed.

    Miss a spec and you’re not just slipping startup, you might be arguing warranty instead of making a product.

    The EPC contractor’s job here is to lock in suppliers early, audit their shops, and align deliveries with the construction plan. Getting a 200-ton cold box six months late can idle an otherwise finished site.

    Phase 4: Construction and Installation

    Field work on a cryogenic plant is not typical pipe-rack construction. Crews pour deeper foundations and install insulation barriers because cryogenic service can supercool surrounding soil and damage supports. Anchor bolts and supports use alloys selected to stay ductile at very low temperatures, not snap like glass. Rigging plans for cold boxes read like surgery notes because a dented shell can ruin vacuum integrity and cripple performance.

    Where possible, teams lean on modularization. Skids — pumps, valves, instruments, wiring — are assembled and tested in controlled shop conditions, then shipped and tied in. That improves quality, reduces weather risk, and shortens field work.

    Safety training during construction reflects cryogenic realities, not just generic industrial hazards. Workers are trained on two critical risks:

    1. Oxygen displacement. Boiled-off nitrogen or similar gases can drive oxygen below safe breathing levels without obvious warning.
    2. Extreme cold. Liquid nitrogen around -320°F (-196°C) can cause instant frostbite, and materials embrittled by cold can fracture violently.

    Before introducing any cryogenic fluids, the team runs mechanical completion checks: pressure tests, vacuum integrity checks, helium leak tests, and full instrument loop checks back to the control system. Finding a wiring or torque issue now is cheap. Finding it during cooldown is not.

    Phase 5: Commissioning and Startup

    Commissioning is when the site stops being a construction project and becomes an operating asset. Systems are brought online in a controlled sequence, instrumentation is proven, and operators rehearse logic, shutdowns, and emergency responses.

    Cooldown is the make-or-break step. You cannot shock-freeze a cryogenic unit. Temperature has to drop in a controlled profile so metal, welds, seals, and instruments adjust without cracking.

    Operators watch temperature points across exchangers, columns, and piping, looking for uneven cooling that hints at blockage or trapped moisture. Rushing this stage is how expensive equipment gets damaged before the first product ever ships.

    After cooldown, performance testing answers the questions that really matter:

    • Does the unit hit purity and capacity at the energy use predicted in FEED?
    • If not, is the issue tuning, or is it mechanical?

    Startup teams also clean up instrumentation quirks like transmitter drift and incorrect level readings at deep-cold temperatures.

    What Separates Successful Cryogenic EPC Projects

    Certain patterns show up again and again on projects that start up cleanly and stay reliable:

    Experienced people. Engineers who’ve designed multiple cryogenic units catch material and flexibility problems early.

    Construction managers who’ve set cold boxes know the difference between “good enough” and “this will fail at -260°F.” Operators who have brought similar systems down and back up treat cooldown like a controlled sequence, not a race.

    Early risk work. Technical risk (Can this vendor really meet cryogenic specs?), schedule risk (Are long-lead items ordered in time?), and operational risk (Do we have procedures and training for oxygen-deficient atmospheres and extreme cold exposure?) get attention from day one.

    Tight communication. Weekly project reviews clear clashes before they turn into rework. Regular updates with regulators keep permits moving. 

    Local outreach matters too; emergency responders need to understand what a cryogenic release looks like and why oxygen displacement is so dangerous.

    Technology, Compliance, and the Finish Line

    Most owners lean toward proven core technology because long-term reliability usually beats a tiny efficiency gain nobody has field-proven at scale. Digital layers, though, are no longer optional.

    Predictive analytics spot performance drift in compressors and cold boxes before it becomes downtime. Advanced process control helps hold purity and throughput on spec. Remote monitoring lets senior specialists support field operators in real time.

    Compliance is not paperwork theater. Industry codes like ASME Section VIII (pressure vessels), API 620 (low-temperature tanks), and NFPA 55 (compressed gases and cryogenic fluids) define safe design, storage, handling, and oxygen-deficiency prevention for cryogenic and compressed gas systems. 

    Ignoring those standards isn’t just illegal; it invites denied permits, higher insurance exposure, and preventable safety incidents. Environmental requirements around air emissions, vented gas, noise, and stormwater also have to be designed in early; bolting them on late is how schedules fall apart.

    Bringing It Together

    Cryogenic gas plants are unforgiving. You’re dealing with fluids near -260°F to -320°F (-162°C to -196°C), handling products whose volume, purity, and reliability underpin global energy logistics, hospital supply chains, and modern semiconductor manufacturing. The margin for error is thin.

    The five-phase EPC path gives structure to that risk: Concept → FEED → Detailed Engineering & Procurement → Construction → Commissioning/Startup.

    The winners are the teams that treat each step with discipline: realistic siting and permitting, serious FEED work that bakes in safety and operability, early procurement of long-lead cryogenic hardware, field execution that respects what extreme cold can do, and a controlled startup led by people who’ve done it before.

    Teams that follow that model don’t just reach the first product. They hand over plants that hit spec, run reliably, satisfy regulators, and protect people. In the cryogenic world, that’s the difference between an asset and a liability.

    PLC Construction provides complete EPC services, backed by decades of experience and a team of skilled, certified professionals.


    Remote Alarm Management with WIN-911: Building a Reliable Callout Strategy

    When a critical alarm goes unseen at 3 a.m., the shock waves hit safety, production, and compliance all at once. These aren’t abstract risks for oil and gas or heavy industry. They’re the sort of failures that turn into night-long shutdowns and week-long investigations.

    For 24/7 operations, adopting WIN-911 isn’t about convenience; it’s about keeping the plant monitored when no one is at the console.

    Understanding the Remote Alarm Problem

    Older models assumed a staffed control room, which left blind spots during shift changes, maintenance, storms, and evacuations. Modern plants are distributed: upstream pads and midstream stations spread across counties, specialists on the road, and experts on call.

    Meanwhile, SCADA can spew hundreds of alarms a day, burying the urgent inside the routine. Miss a high-pressure alarm and you can get a safety event. Miss a pump trip and you can be staring at $50,000 per hour in lost production, plus extended maintenance windows, emergency callouts, and repair costs.

    Remote alarm management fixes two things simultaneously: 

    • It routes the right event to the right person fast.
    • It confirms that person actually received it.

    This way, the plant can act within defined time windows instead of relying on hope and luck.

    WIN-911 in a Nutshell

    WIN-911 is purpose-built for industrial notification. It sits next to your HMI/SCADA/DCS stack, connects through standard interfaces, and pushes alarms out over multiple channels: two-way voice (text-to-speech), SMS, email, and a mobile app. 

    That mix provides redundancy when a site has spotty coverage or noise that drowns out a ring. The system also supports hands-free voice acknowledgment so techs can confirm receipt without dropping tools. Useful when they’re in PPE or mid-task.

    On the data side, WIN-911 polls alarms from your control system via industrial connectors such as OPC Data Access (OPC DA), the long-standing standard for moving live tags between devices, SCADA, and HMIs.

    In short: an OPC server exposes items; client software (like an HMI or a notifier) reads, writes, and subscribes to change events.

    Because the platform is built for volume, it can filter, prioritize, and throttle floods so urgent events break through while nuisance noise gets parked.

    Audit trails and reports show who was notified, how they were notified, and when they acknowledged. Powerful evidence for investigations and continuous improvement.

    A Practical Framework for Callout Strategy

    Alarm Prioritization and Classes

    Start by separating critical safety from production and maintenance/informational alarms.

    Fire and gas, ESD trips, and equipment failures with immediate hazard get “all-hands” treatment; throughput and quality events still matter but allow measured response; condition-based alerts inform planning.

    This tiering protects focus and prevents fatigue.

    People and Escalation

    Define the human chain of custody.

    • Primary contacts are the operators and technicians with direct responsibility and authority.
    • Secondary contacts are supervisors or specialists who can authorize bigger moves.
    • Backup contacts cover nights, holidays, and call-ins.

    Route by skill and location so the person who can fix it first sees it first.

    Response Windows and SLAs

    Put numbers on it: how long before a critical alarm must be acknowledged (five–fifteen minutes), how long before a production alarm must be acknowledged (fifteen–thirty), and what happens if nobody responds.

    These SLAs anchor escalation timing and let you measure performance over time.

    WIN-911 Configuration: Getting the Basics Right

    Tactics and Sources

    Create tactics (WIN-911’s rulesets) that define which alarms to watch, how to format messages, and who gets what. 

    Connect to alarm sources (SCADA/PLC/DCS) over OPC DA or the platform’s supported interfaces; group people by real responsibility, not org chart alone.

    Prefer mobile numbers for first reach; use voice for noisy areas, SMS for weak-signal zones, and email for non-urgent, detail-heavy notices.

    Test every contact method up front and on a cadence; numbers drift, inboxes change, and cell towers do, too.

    Advanced Escalation

    Build multi-stage sequences. 

    1. Stage 1 goes to the equipment owner.
    2. No acknowledgment within the SLA triggers Stage 2 (supervisor, controls engineer).
    3. A further miss escalates to Stage 3 (manager, on-call vendor).

    Use conditional routing so safety-critical alarms fan out immediately, while planned-maintenance alarms route to reduced lists. Integrate shifts, holidays, and outages so coverage follows the calendar automatically.
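
    A stripped-down sketch of that staged logic is shown below. The stage names, contact lists, and SLA minutes are invented for illustration and are not WIN-911 configuration syntax:

    ```python
    from datetime import datetime, timedelta

    # Hypothetical escalation policy mirroring the staged sequence above.
    ESCALATION = [
        {"stage": 1, "contacts": ["equipment_owner"],            "sla": timedelta(minutes=10)},
        {"stage": 2, "contacts": ["supervisor", "controls_eng"], "sla": timedelta(minutes=10)},
        {"stage": 3, "contacts": ["manager", "oncall_vendor"],   "sla": timedelta(minutes=15)},
    ]

    def current_stage(alarm_time, now, acknowledged):
        """Return the escalation stage that should be active, or None if acked/exhausted."""
        if acknowledged:
            return None
        deadline = alarm_time
        for step in ESCALATION:
            deadline += step["sla"]
            if now < deadline:
                return step
        return None  # every stage timed out; fall back to the documented manual callout

    alarm_time = datetime(2024, 5, 1, 3, 2)
    step = current_stage(alarm_time, now=datetime(2024, 5, 1, 3, 25), acknowledged=False)
    if step:
        print(f"Notify stage {step['stage']}: {', '.join(step['contacts'])}")
    ```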

    Redundancy and Failover

    Eliminate single points of failure. Use redundant servers (or instances), UPS, secondary WAN paths (cellular or satellite for remote pads), and dual notification channels per person. Document a manual backup (who calls whom) for black-sky events.

    For cybersecurity and resilience, align your remote-access and notification architecture with ICS guidance from DHS/CISA (e.g., multifactor auth, segmentation, and least privilege for remote pathways).


    Implementation That Sticks

    Start with a Pilot: pick a narrow but meaningful slice, like high-risk safety systems or a troublesome compressor station. Keep the pilot team small and engaged (operators + one controls engineer + one supervisor).

    Define what “better” means: faster acknowledgments, fewer missed alarms, cleaner handoffs. Prove it, then scale.

    System Testing: run end-to-end drills: 

    SCADA alarm → WIN-911 delivery → human acknowledgment (voice, SMS, app) → SCADA reflects the acknowledgment.

    Test at shift change, at 2 a.m., and during planned outages. Load-test for upsets where hundreds of alarms arrive in minutes; you want to see filtering keep the console usable and the callout stack moving.

    Training and Adoption: train for both the tool (installing the app, voice ack, handling repeats) and the process (who owns what, when to escalate). 

    Provide short “first-five-minutes” cards and one-page flowcharts so new hires and contractors can follow the playbook under stress. Keep a FAQ handy for IT (ports, firewall rules, device enrollment).

    Staying Reliable Over Time

    Health Monitoring

    Watch server and gateway health, queue depths, and delivery success rates. Set meta-alerts for notification failures (e.g., a carrier outage or email server timeouts). Review monthly trends and downtime roots.

    People Data

    Audit contact data on a schedule—people change roles, numbers, and devices. Build automated checks (e.g., a weekly test ping) to catch stale entries before the 3 a.m. alarm finds a dead phone.

    Alarm Hygiene

    If operators are drowning, fix the upstream problem. EEMUA 191’s widely used benchmarks target manageable rates (e.g., alarm floods defined as >10 new alarms in 10 minutes) and encourage rationalization so real problems aren’t lost in noise.

    Use your reports to find bad actors, chattering points, and mis-prioritized events; fix setpoints or logic before they become cultural background noise.
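
    A simple way to spot those floods in an exported alarm log is a rolling 10-minute count; the timestamps below are fabricated for the example:

    ```python
    from datetime import datetime, timedelta

    def is_flood(alarm_times, window=timedelta(minutes=10), threshold=10):
        """Return (flooded, peak_count): True if more than `threshold` new alarms
        arrive within any rolling `window` (the commonly cited EEMUA 191 criterion)."""
        times = sorted(alarm_times)
        peak, start = 0, 0
        for end, t in enumerate(times):
            while t - times[start] > window:
                start += 1
            peak = max(peak, end - start + 1)
        return peak > threshold, peak

    # Hypothetical alarm log: a burst of 15 alarms in about two minutes during an upset.
    base = datetime(2024, 5, 1, 2, 0)
    log = [base + timedelta(seconds=8 * i) for i in range(15)]
    flooded, peak = is_flood(log)
    print(f"Flood condition: {flooded} (peak {peak} alarms in a 10-minute window)")
    ```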

    Security and Remote Access

    Remote notification lives in the same ecosystem as remote access. Apply standard ICS security practices: network segmentation, strong authentication, least privilege, monitored gateways, and maintenance of the mobile endpoints that receive alarms. 

    CISA’s practical playbook on remote access for ICS is a useful checklist for policy and design reviews.

    Scheduling That Mirrors Reality

    Tie WIN-911 schedules to actual rosters, rotations, holidays, and known maintenance windows so coverage is automatic.

    If a weekend crew is shorter, widen the Stage-1 distribution list; if a unit is down for PM, route its nuisance alarms to a parking group so they don’t page people unnecessarily.

    Keep an “operations override” for storms and turnarounds so supervisors can temporarily broaden notifications with a single switch.

    End-to-End Validation (Don’t Skip This)

    Before go-live, run table-top drills and live tests for each alarm class. Trigger sample alarms from SCADA and verify:

    • The payload is formatted clearly.
    • The right people get it on the channels you expect.
    • Acknowledgments close the loop in the control system.
    • Escalations trigger on time.

    Repeat the test during shift change and again at off-hours. Then perform a short flood test to make sure filtering rules prevent pile-ups while priority events still break through.

    Reporting and Audits

    Schedule monthly reports that show SLA performance, alarm volumes by class, top escalated events, and common failure points (bad numbers, full mailboxes, out-of-coverage cells).

    Use the audit trail when you review incidents: who was notified, by what path, and when did they acknowledge? Close the loop with a standing “alarm quality” meeting so maintenance and controls can correct chattering points and setpoint errors instead of normalizing the noise.

    Mobile Use in the Field

    Coach techs on when to acknowledge immediately (clear, actionable events) and when to hold the ack until they’ve verified local conditions. The goal isn’t to hit the button fast—it’s to confirm the right action is underway.

    Where coverage is weak, pair SMS with voice and let techs queue acknowledgments in the mobile app until connectivity returns. If your policy allows BYOD, apply MDM/MAM controls and require screen lock, encryption, and the ability to remote-wipe.

    Measuring What Matters

    Response analytics make the value visible:

    • Average time-to-acknowledge before and after the rollout.
    • Percent of alarms cleared within SLA
    • Escalation depth (how often Stage 2 or 3 gets invoked).

    Volume analytics point to upstream tuning: nuisance alarms, chattering points, and off-hours surges. 

    User feedback keeps things human: what messages were confusing, which alarms should be grouped, which voice prompts saved time.

    ROI comes from avoided incidents, shorter upsets, and cleaner handoffs. Faster acknowledgment shrinks equipment damage and production loss; cleaner callouts reduce overtime and drive fewer emergency call-ins. Often the program pays for itself in months, not years.

    Where Standards Fit

    Alarm management isn’t a blank sheet. ISA-18.2 defines the alarm-management life cycle: philosophy, identification, rationalization, implementation, operation, maintenance, monitoring, and audit.

    Align with it and your rulesets, KPIs, and reviews share a common structure and vocabulary. If you’re aligning corporate policy, start here and tailor for your sites.

    For data movement, understand your plumbing. OPC DA is the classic, widely deployed mechanism many plants still rely on for HMI/SCADA connectivity; newer systems often add OPC UA for secure, modern connectivity.

    If you know how these clients and servers browse items, subscribe to changes, and handle quality/timestamps, you’ll troubleshoot integrations faster.
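
    For a concrete feel of that plumbing, here is a minimal sketch using the open-source python-opcua client to read a tag and subscribe to change events. The endpoint URL and node id are placeholders, and a classic DA-only system would need a different (COM-based) client:

    ```python
    import time
    from opcua import Client  # open-source python-opcua (FreeOpcUa) client

    class TagChangeHandler:
        """Receives callbacks from the client's subscription thread."""
        def datachange_notification(self, node, val, data):
            print(f"{node} changed to {val}")

    client = Client("opc.tcp://scada-host:4840")  # placeholder endpoint
    client.connect()
    try:
        # Placeholder node id for a tank level tag exposed by the server.
        tank_level = client.get_node("ns=2;s=TankFarm.TK101.Level")
        print("Current value:", tank_level.get_value())

        # Subscribe to change events (500 ms publishing interval) instead of polling.
        sub = client.create_subscription(500, TagChangeHandler())
        sub.subscribe_data_change(tank_level)
        time.sleep(10)  # let a few notifications arrive
    finally:
        client.disconnect()
    ```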

    Putting It All Together

    WIN-911 doesn’t fix culture by itself; it gives your culture a reliable nervous system. 

    Build a callout plan that mirrors how your people actually work, test it like you test your safety systems, and keep tuning both the alarms and the roster. 

    Start with a pilot, publish simple SLAs, and review the numbers every month. 

    When the 3 a.m. event hits, and it will, you’ll have a system that reaches the right person, gets a confirmed acknowledgment, and buys back the minutes that matter most.


    Produced-Water / SWD Controls: Automations that Cut Truck Wait & Opex

    Oil and gas operators face a simple but ugly truth: handling produced water can swallow a huge share of lease operating cost. In some shale basins it approaches half of total operating expense, while trucks sit in line at saltwater disposal (SWD) sites for 45–90 minutes or more.

    The scale is massive. U.S. wells generate tens of billions of barrels of produced water every year, with published estimates in the 15–24 billion barrel range. When that kind of volume backs up, disposal stops supporting work and starts dictating production. Water handling stops being a side task and becomes the constraint.

    Automation is how you break that choke point. Well-designed control systems cut truck wait time dramatically and drive double-digit drops in day-to-day operating cost.

    In our work across dozens of facilities, we’ve watched disposal sites move from constant firefighting to steady, predictable throughput in under a year. The common thread is not a gadget; it’s coordinated controls tied to live data and predictive logic.

    The Hidden Cost of Bottlenecks

    Truck Wait Time and Production Risk

    Those 45–90 minute delays at busy SWDs aren’t just driver complaints. During peak hours, that delay often stretches to two or three hours. If a truck can’t unload, water piles up at the well site. Now operations slow down or scramble for temporary storage, service crews miss their slot, and you start burning money to deal with water instead of producing hydrocarbons.

    Idling is its own tax.

    • Trucks burn fuel just sitting.
    • Drivers are on the clock without moving.
    • Equipment racks up wear with no revenue attached.

    The bigger hit comes upstream: if water haul-off can’t keep pace, production throttles whether you planned for it or not.

    Ongoing Opex Drag

    Traditional SWD facilities are labor intensive. Someone has to watch pressures, confirm injection rates, log volumes, and keep an eye on alarms 24/7. That means multiple shifts, overtime, and inevitable human error.

    Compliance work piles on top. Regulators expect accurate, timestamped injection volumes, pressure histories, and environmental data. Gathering and formatting that manually eats hours and exposes you to penalties if a number is off. 

    Because most sites still run reactively, critical pumps or valves tend to fail when you’re busiest. Forced downtime during peak trucking windows doesn’t just create repair cost — it backs up the entire chain.

    How SWD Automation Works

    Core Components

    Modern SWD automation ties sensors, PLCs, and HMIs into one coordinated control layer. 

    • High-accuracy flow meters track every barrel.
    • Continuous pressure monitoring keeps injection wells inside safe limits.
    • If something drifts, the system trims flow automatically; if something looks dangerous, interlocks shut it down before you scar a wellbore or cook a pump.

    All of that feeds live data to operations. Thousands of points update continuously, get stored automatically, and surface as clear status screens instead of scattered clipboards.
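
    As a toy illustration of the trim-and-interlock behavior described above, the logic might look like the sketch below. The setpoints and roles are invented, and in practice this lives in PLC logic, not a script:

    ```python
    # Illustrative injection-pressure supervision: trim flow on drift, trip on danger.
    # Setpoints are invented; real values come from the well permit and the process engineer.
    MAX_INJECTION_PSI = 1400.0   # hard interlock limit
    TRIM_BAND_PSI = 1300.0       # start trimming flow above this
    TRIM_STEP_PCT = 5.0          # reduce the flow setpoint in small steps

    def supervise(injection_psi, flow_setpoint_pct):
        """Return (new_flow_setpoint_pct, trip) for one scan of the control loop."""
        if injection_psi >= MAX_INJECTION_PSI:
            return 0.0, True                       # shut down before the wellbore is damaged
        if injection_psi > TRIM_BAND_PSI:
            return max(flow_setpoint_pct - TRIM_STEP_PCT, 0.0), False
        return flow_setpoint_pct, False

    for psi in (1250, 1320, 1360, 1410):
        setpoint, trip = supervise(psi, flow_setpoint_pct=80.0)
        print(f"{psi} psi -> setpoint {setpoint:.0f}%, trip={trip}")
    ```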

    Field staff get alerts on their phones. Supervisors can see multiple sites from a central control room instead of driving location to location.

    Integration with Existing Infrastructure

    You don’t have to rip out working hardware to modernize. Most control platforms speak Modbus, Ethernet/IP, OPC-UA, and other standard industrial protocols, so they can sit on top of existing SCADA and talk to the gear you already trust.

    Rollouts usually come in phases. 

    You start with visibility: automated metering, alarming, trending. 

    Then you hand limited control tasks to the system: flow control, routing, shutdown logic. 

    Once the team is comfortable, you turn on optimization. Wireless instrumentation helps here by cutting trenching, conduit, and long pull runs.

    For multi-site operators, centralized architecture matters. Standardizing logic across all SWDs gives consistent behavior while still letting each site reflect its own limits and geology. 

    Built-in redundancy and hardened cybersecurity support reliable low-latency communication, which SCADA networks depend on to move alarms and commands securely from remote sites to the control room.


    Where Automation Pays Off

    Benefit 1: Shorter Truck Lines

    Smart scheduling and automated receiving logic attack the wait-time problem directly. The system looks at who’s inbound, current tank and injection capacity, and any maintenance holds, then staggers arrivals so trucks show up when the site can actually take fluid.

    Real-time control prevents choke points. Automated valves and routing shift flow to open capacity before a line forms. RFID or digital ticketing trims 5–15 minutes off each check-in by killing clipboard work and manual data entry.

    The last piece is uptime. Predictive maintenance tools monitor vibration, temperature, and load on pumps and injection equipment, then flag issues early.

    In heavy industrial environments, predictive maintenance programs routinely cut unplanned downtime by about 30–50% while extending equipment life 20–40%. Keeping the site online during peak haul windows is what actually drains the queue.
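
    One crude form of that early flagging is a rolling statistical check on a vibration trend. The readings and threshold below are made up; production systems typically rely on vendor condition-monitoring analytics rather than a ten-line script:

    ```python
    from statistics import mean, stdev

    def drift_alert(history, latest, sigma=3.0):
        """Flag a reading more than `sigma` standard deviations above the recent
        baseline -- a crude stand-in for vendor condition-monitoring tools."""
        baseline, spread = mean(history), stdev(history)
        return latest > baseline + sigma * spread

    # Hypothetical pump vibration readings (in/sec): a baseline window, then a new sample.
    recent = [0.11, 0.12, 0.10, 0.13, 0.12, 0.11, 0.12, 0.13]
    new_reading = 0.21

    if drift_alert(recent, new_reading):
        print("Vibration trending high -- schedule an inspection before the peak haul window")
    ```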

    Benefit 2: Lower Opex

    Staffing changes first. Sites that once required someone on-site around the clock can shift to remote monitoring plus targeted visits. That reduces overtime and fatigue without sacrificing awareness.

    Equipment lasts longer because it runs in its sweet spot instead of being hammered at extremes. Variable frequency drives and automated control logic keep pumps and valves where they want to live, which lowers energy use and slows wear.

    Maintenance moves from calendar-based (“every X hours whether it needs it or not”) to condition-based (“fix it when the data says performance is drifting”).

    Administrative overhead drops too, because the system has already logged the data regulators ask for.

    Implementing It Without Breaking Operations

    Plan With Real Numbers

    Start by capturing how your disposal network actually runs today: average unload time, queue length during frac peaks, injection pressure trends, after-hours callouts, reporting labor. That baseline becomes your business case.

    When you model savings, include more than headcount. Throughput gains, avoided emergency storage, and fewer unscheduled shutdowns often outrun payroll savings.

    Local rules matter. Produced-water volumes keep climbing, and some basins are tightening injection limits and pushing disposal costs sharply higher. In the Delaware sub-basin of the Permian, for example, water cuts can reach roughly five barrels of water for every barrel of oil.

    Regulators have moved to cap disposal and redirect volumes, driving per-barrel handling costs up 20–30%. Engaging regulators early keeps you from redesigning twice.

    Choose partners who understand SWD, not just generic plant automation. You’re not just installing new boxes; you’re changing how water moves.

    Deploy in Phases

    A typical path looks like this: first, instrumentation and visibility; second, remote control and automated interlocks; third, optimization and automated scheduling. Each step earns trust before you add the next.

    Training is part of the rollout, not an afterthought. Crews need to know what the system will do on its own, what requires manual intervention, and how to override safely.

    Before you call the project done, test both in the shop and in the field. Factory Acceptance Testing proves the logic under controlled conditions. Site Acceptance Testing proves it with real pumps, real pressures, and real trucks.

    Proving the Value

    After automation, the pattern is consistent. Truck throughput climbs because arrivals are sequenced and unloads are faster. Emergency scrambling for temporary storage drops. Fewer people have to sit on-site all night just to watch gauges.

    Equipment availability improves because you’re fixing problems before they fail during peak demand. Those gains show up both in fewer headaches and in the monthly P&L.

    That matters, because produced-water management is already one of the most expensive, closely watched pieces of U.S. shale operations, and operators spend billions per year on hauling, treating, and disposing of this water. 

    Even small percentage improvements in uptime and throughput translate directly into cash.

    Looking Ahead

    The direction of travel is clear: more sensors at the edge, smarter analytics in the middle, and fewer manual decisions in the moment.

    Instead of reacting to alarms, teams are beginning to forecast stress:

    • Weather
    • Production ramps
    • Pipeline outages

    These shape water logistics ahead of time. That’s not hype; it’s the practical version of “digital oilfield.”

    Next Steps

    SWD automation isn’t experimental anymore. It delivers shorter truck lines, lower operating cost, tighter compliance, and faster payback when it’s scoped and rolled out correctly.

    The first move is simple: document where your disposal network is losing time or money right now. From there, you can map the control, monitoring, and reporting upgrades that remove those bottlenecks and build a phased plan that fits your risk tolerance and budget.


    Field Communications: Designing Reliable Radio/Tower Networks for SCADA

    Industrial operations live and die by their field communications. When a pipeline pressure sensor stops reporting or a remote wellhead drops off the network, production slows, safety margins shrink, and money burns. For operators spread across hundreds of miles, a dependable radio network is vital.

    SCADA doesn’t just send data. It moves two classes of traffic at once: critical control and everything else. 

    Critical signals, like shutdown commands, high-pressure alarms, and pump and valve actions, can’t lag. Many utilities design around end-to-end delays on the order of 100 milliseconds for SCADA control traffic because anything slower can let equipment damage or safety events develop.

    Bulk flows like trend logs, production reports, and maintenance records can tolerate delay. A solid network guarantees the first without starving the second.

    Now add field reality: remote sites with no commercial power, brutal temperature swings from 140°F sun to hard freeze, salt spray, vibration, and the constant need to lock down remote access against attackers who explicitly target industrial control systems.

    You don’t just have a networking problem. You have an operational risk problem. The rest of this guide walks through how to design radio and tower networks that still work when conditions are trying to break them.

    SCADA Communication Requirements

    Mixed Traffic, Mixed Urgency

    SCADA networks carry messages with very different timing expectations. Emergency shutdowns and trip conditions sit in the “late equals damage” category. These signals need immediate delivery and guaranteed execution. Delays become a risk.

    Then there’s slow data: historian tags, production totals, compliance snapshots. That information matters for reporting, optimization, and maintenance planning, but it doesn’t have to arrive this instant.

    The trick is to enforce Quality of Service (QoS) so control traffic rides in a priority lane and never fights with bulk uploads or dashboard refreshes.

    Bandwidth needs vary wildly. A single tank level transmitter might limp along at 9.6 kbps. Add high-frequency sensors, security video, and analytics, and you’re suddenly pushing megabits. Smart designers assume “extra” capacity will not stay extra. Growth always comes.

    Environmental and Operational Stress

    Remote compressor stations 80 miles from town don’t get fiber, and they’re not always inside reliable LTE footprints. Radio becomes the lifeline.

    Power is scarce. If the nearest utility feed is 20 miles away, radios and RTUs run on solar and batteries. That power system has to ride out a week of cloudy weather without dropping critical visibility.

    The environment tries to kill hardware. Cabinets cook in full sun, then freeze overnight. Salt air corrodes terminals. Pumps shake hardware loose. IP67-rated enclosures, conformal coating, gasketed connectors, and active temperature management aren’t “extras” — they’re what keep the link up.

    Radio Network Design Fundamentals

    Frequency Strategy: Licensed vs. Unlicensed

    Choosing a spectrum is a risk decision. Licensed spectrum costs more, but under FCC Part 101, fixed microwave users get coordinated, exclusive channels for their service area.

    That protection dramatically reduces interference and stabilizes high-capacity backbone links between key sites such as plants, compressor stations, and control centers.

    Unlicensed bands, most commonly 900 MHz ISM (902–928 MHz) and 2.4 GHz, are attractive because hardware is cheap and no license application is required. 

    Range at 900 MHz is excellent for wide oil, gas, and water systems, but those bands are crowded and channel choices are limited. Interference from other industrial users (and even consumer devices) can cause retries and dropped packets right when you need dependable control.

    The safe pattern is to put backbone and high-risk assets on licensed or protected links, then use unlicensed where occasional retries are acceptable.

    Path Analysis and Link Budgets

    RF doesn’t care about project deadlines. Line of sight is more than “I can see the other tower.” You also have to clear the Fresnel zone: the three-dimensional RF “bubble” between antennas.

    Standard practice is to keep at least 60% of the first Fresnel zone clear of trees, terrain, and structures so reflections don’t tear down link quality. Ignore that and you’ll watch reliability collapse during temperature swings or fog.

    After geometry comes math. A link budget starts with transmitter power and antenna gain, subtracts feedline and path losses, and compares what’s left to receiver sensitivity.

    The difference is your fade margin: the safety cushion against rain fade, ducting, fog, and general atmospheric weirdness.

    Long microwave hops are often engineered for roughly 20 dB of fade margin to hit “four nines” (≈99.99%) style availability. Anything below ~15 dB on a mission-critical path is asking for middle-of-the-night outages.
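    For illustration, the sketch below runs that arithmetic end to end: free-space path loss, received level, fade margin, and the mid-path first-Fresnel-zone radius. Every radio and path number in it is a placeholder; a real design also accounts for terrain, obstructions, and the manufacturer’s actual curves.

    import math

    # Illustrative link-budget check; every input below is a placeholder value.
    tx_power_dbm = 30.0        # transmitter output
    tx_antenna_gain_dbi = 24.0
    rx_antenna_gain_dbi = 24.0
    feedline_loss_db = 4.0     # both ends combined
    rx_sensitivity_dbm = -90.0 # at the required data rate
    freq_mhz = 960.0
    path_km = 40.0

    # Free-space path loss (dB): 32.44 + 20*log10(f_MHz) + 20*log10(d_km)
    fspl_db = 32.44 + 20 * math.log10(freq_mhz) + 20 * math.log10(path_km)

    rx_level_dbm = (tx_power_dbm + tx_antenna_gain_dbi + rx_antenna_gain_dbi
                    - feedline_loss_db - fspl_db)
    fade_margin_db = rx_level_dbm - rx_sensitivity_dbm

    # First Fresnel zone radius at mid-path (m): 17.32 * sqrt(d_km / (4 * f_GHz))
    fresnel_radius_m = 17.32 * math.sqrt(path_km / (4 * (freq_mhz / 1000.0)))

    print(f"FSPL: {fspl_db:.1f} dB, RX level: {rx_level_dbm:.1f} dBm")
    print(f"Fade margin: {fade_margin_db:.1f} dB (aim for ~20 dB on critical hops)")
    print(f"Keep at least 60% of the {fresnel_radius_m:.1f} m mid-path Fresnel radius clear")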

    One last point: software models lie if the inputs are stale. Terrain changes. Trees grow. Metal structures appear. Always verify critical paths in the field (or by drone) before treating a model as gospel.

    Tower Infrastructure and Antenna Systems

    Tower Siting and Civil Work

    Where you drop a tower drives coverage, cost, and survivability. Hilltops give elevation and cleaner shots but invite lightning and access headaches. Valleys are easier for maintenance crews but may force taller structures just to clear terrain.

    Permitting rarely moves fast. Expect FCC coordination, FAA lighting/marking rules on taller structures, local zoning, environmental impact and cultural resource reviews, sometimes even endangered species surveys. Bake that time into the plan.

    Foundations and grounding are not “check the box” items. Soil conditions decide foundation type. Seismic and ice loading matter in certain regions. Proper grounding and lightning protection guard both equipment and people. Cutting corners here gets expensive or dangerous.

    Antennas, Alignment, and Diversity

    Antenna choice quietly shapes network behavior. Omnidirectional antennas make sense for a hub talking to lots of remotes in all directions. Directional antennas throw a tight beam downrange and are ideal for long point-to-point shots, but only if they’re aligned and mounted so they stay aligned.

    Diversity is cheap insurance. Space diversity (two receive antennas at different heights) and frequency diversity (sending the same data on two channels) both help ride through atmospheric fades that would wipe out a single antenna.

    Yes, it costs more. So does downtime.

    Install discipline matters. Polarization must match or you can lose 20 dB instantly. Every coax run needs real weatherproofing; water in a connector will kill an RF path faster than almost anything else. Mounts have to resist wind and vibration so they don’t slowly drift. Plan yearly inspections so you fix issues before they snowball.


    Network Reliability and Security

    Redundancy and Failover

    Any single point of failure in SCADA comms will eventually fail. Good designs remove them. Critical sites should have a primary and a backup path, ideally via different towers, different frequency bands, or even different radio technologies, with automatic failover watching link health and switching in milliseconds.

    Geographic diversity is your safety net when lightning, power loss, or weather takes out an entire hub. If a main tower dies, a secondary hub 50 miles away should already be syncing data and able to assume control. Dual infrastructure costs money, but compare it to shutting in production or losing pipeline visibility.

    Continuous monitoring ties it together. Operations should see live signal strength, error rates, and throughput for each hop. Alarms should trigger upon degradation before users complain. Trend data helps you spot repeating weak points, like a microwave leg that always sags at sunrise.
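    A monitoring hook for that can be very simple. The sketch below compares each hop’s latest RSSI to its rolling baseline and raises an alarm on a sharp sag; read_rssi_dbm() is a stand-in for however your radios actually expose diagnostics (SNMP, REST, serial), and the threshold and sample counts are placeholders.

    from statistics import mean

    def read_rssi_dbm(hop: str) -> float:
        """Stand-in for the radio's diagnostic interface (SNMP, REST, serial)."""
        return -62.0   # placeholder reading for illustration

    ALARM_DROP_DB = 10.0   # alarm if RSSI sags this far below the rolling baseline

    def check_hop(hop: str, history: list[float]) -> None:
        """Compare the latest reading to this hop's baseline, then log it."""
        rssi = read_rssi_dbm(hop)
        if history and rssi < mean(history) - ALARM_DROP_DB:
            print(f"ALARM: {hop} RSSI {rssi:.1f} dBm is well below its baseline")
        history.append(rssi)
        del history[:-288]   # keep roughly a day of 5-minute samples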

    Cybersecurity in the Field

    Attackers actively go after SCADA and other industrial control systems, and U.S. agencies have warned about purpose-built malware that targets field devices, remote access paths, and engineering workstations. The response looks a lot like “defense in depth,” adapted for the field:

    • Segment networks so a compromise in corporate IT can’t walk straight into safety systems or compressor controls.
    • Control and log remote access instead of leaving standing always-on tunnels. Use strong authentication, preferably multi-factor.
    • Encrypt radio traffic (for example with modern AES-class encryption) so intercepted data isn’t immediately useful.
    • Assume hardware can be stolen. A missing field radio shouldn’t automatically have valid trust inside your network.

    Implementation and Commissioning

    Survey, Plan, Prove

    Paper designs meet dirt during site surveys. That “clean shot” on the drawing may clip a new utility line. The control shed might not have ventilation for radio gear. The access road may turn to mud every spring. Physical verification prevents expensive surprises.

    Integration planning saves you from finger-pointing during startup. 

    • Confirm radios speak the protocol your SCADA host expects.
    • Check voltage and load against the site’s actual power budget.
    • Bench-test full chains (radio → RTU/PLC → SCADA) before you send crews hundreds of miles to install.
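    For that bench-test step, even a bare-bones protocol check pays off. The sketch below issues a single Modbus/TCP “Read Holding Registers” request so you can confirm the radio-to-RTU chain end to end; the IP address, unit ID, and register addresses are placeholders, and your real I/O map comes from the integration documents.

    import socket
    import struct

    def read_holding_registers(ip: str, unit: int, start: int, count: int) -> list[int]:
        """Issue a single Modbus/TCP 'Read Holding Registers' (function 0x03) request."""
        # MBAP header: transaction id, protocol id (0), length, unit id; then the PDU.
        request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 0x03, start, count)
        with socket.create_connection((ip, 502), timeout=5) as sock:
            sock.sendall(request)
            reply = sock.recv(260)   # one recv is enough for a short bench reply
        # Reply: 7-byte MBAP + function code + byte count + 2 bytes per register.
        if len(reply) < 9 or reply[7] != 0x03:
            raise IOError(f"unexpected Modbus reply: {reply.hex()}")
        return list(struct.unpack(">" + "H" * count, reply[9:9 + 2 * count]))

    # Example bench check (placeholder address and registers):
    # print(read_holding_registers("192.168.1.10", unit=1, start=0, count=4))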

    Schedules need margin. Hardware lead times slip, weather windows close suddenly, and key people get pulled to emergencies. Build slack and line up alternates for radios, antennas, and power gear.

    Test Like You Mean It

    Commissioning is about performance, not just “does it turn on.” Run bit-error-rate tests to verify data integrity. Measure throughput. Log receive levels, fade margin, and error counts at different times of day and in different weather.

    RF changes with temperature and humidity; capture that baseline now so you can compare six months later when someone claims “the link got worse.”

    Training closes the loop. Control room staff need to read link-health dashboards at 2 AM without guessing. Field techs need torque specs, inspection intervals, and troubleshooting flowcharts. Give them short job aids, not just a binder on a shelf.

    Sustaining the System

    Preventive Maintenance

    Most ugly outages trace back to skipped basics. 

    Batteries lose capacity predictably; test quarterly and replace before failure. Thermal cycling loosens RF connectors; re-torque annually. Dust kills solar performance; clean panels. A consistent checklist beats heroics later.

    Optimization and Growth

    Use performance data to tune, not just repair. Underused links can absorb new sites without a forklift upgrade. 

    Hops with a huge fade margin may safely run at lower power, extending gear life. Map expected expansion so you know where the next tower or repeater needs to go. Track bandwidth use so you can order capacity before congestion becomes downtime.

    Also plan refresh cycles; keeping 20-year-old radios alive only works until the last spare part disappears.

    Vendor and Technology Choices

    Specs don’t tell the whole story. Raw transmit power and receiver sensitivity matter, but mean time between failures, spares availability, and support response matter more over the 10-year life you’re actually buying. The “cheap” radio becomes expensive fast if it dies monthly and nobody can service it.

    When you compare costs, look at the total cost of ownership. Hardware might be 30% of the lifecycle. Add install labor, licensing (for protected spectrum), maintenance, truck rolls, downtime exposure, and power. Price the cost of going dark at 3 AM. That usually makes “premium” gear look pretty reasonable.

    You’re not just buying boxes. You’re picking who answers the phone in a storm, who stocks spares, and who trains your crew.

    Where This Is Headed

    Several trends are already reshaping field comms. Private 5G aims to give industrial users dedicated spectrum slices, predictable latency, and prioritized traffic — basically carrier-grade wireless without handing the keys to a public carrier.

    Mesh networks keep getting smarter and lower-power, making self-healing topologies practical even in huge, rough geographies. And edge analytics is starting to watch link health and hardware conditions in real time, flagging problems before humans notice.

    The Bottom Line

    Designing a SCADA radio/tower network is not “put up an antenna and hope.” 

    It’s engineering for latency, power limits, terrain, weather, interference, lightning, and people with bolt cutters.

    It’s planning redundancy so one broken tower doesn’t blind your whole system.

    It’s cybersecurity that assumes outsiders will try to get in. And it’s disciplined upkeep so the network you build on day one still works in year ten.

    At PLC Construction, we build these systems with safety, uptime, and maintainability front and center. We combine proven practices (proper spectrum planning, hard infrastructure, layered security) with emerging tools that make the network smarter over time.

    The payoff is simple: if you treat field communications like core infrastructure instead of afterthought wiring, the network will pay you back in reliability, safety, and flexibility for years.

    Dan Eaves

    Dan Eaves, PE, CSE

    Dan has been a registered Professional Engineer (PE) since 2016 and holds a Certified SCADA Engineer (CSE) credential. He joined PLC Construction & Engineering (PLC) in 2015 and has led the development and management of PLC’s Engineering Services Division. With over 15 years of hands-on experience in automation and control systems — including a decade focused on upstream and mid-stream oil & gas operations — Dan brings deep technical expertise and a results-driven mindset to every project.

    PLC Construction & Engineering (PLC) is a nationally recognized EPC company and contractor providing comprehensive, end-to-end project solutions. The company’s core services include Project Engineering & Design, SCADA, Automation & Control, Commissioning, Relief Systems and Flare Studies, Field Services, Construction, and Fabrication. PLC’s integrated approach allows clients to move seamlessly from concept to completion with in-house experts managing every phase of the process. By combining engineering precision, field expertise, and construction excellence, PLC delivers efficient, high-quality results that meet the complex demands of modern industrial and energy projects.


    Designing for Electrical Safety: Arc-Flash Studies & Coordination Basics

    When a technician at a Texas refinery cracked open a motor control center in 2019, the arc flash that followed blew him roughly fifteen feet backward. He lived, but barely. 

    The blast was hot enough to scorch his skin instantly. This left him with third-degree burns over 40% of his body, months of treatment, and a long recovery. The company was left with $2.3 million in destroyed equipment, six weeks of downtime, and OSHA citations that could have been avoided with a proper arc-flash analysis.

    That’s the stakes. For oil and gas sites, petrochemical plants, and heavy industrial facilities, arc-flash protection isn’t “safety paperwork.” It’s core business protection. A serious event can take out people, hardware, production schedules, and insurance posture in a single hit.

    The smartest operators treat electrical safety studies as a reliability investment, not a compliance tax.

    Why Arc Flash Work Isn’t Optional

    Dangers to Humans

    Arc flash is not a minor spark. It’s an electrical explosion that can spike to roughly 35,000°F, hotter than the surface of the sun, creating a pressure wave strong enough to rupture eardrums and physically launch an adult across a room.

    Safety organizations estimate tens of thousands of arc-flash incidents every year in the U.S., leading to thousands of burn injuries, thousands of hospital admissions, and hundreds of fatalities.

    Financial Impacts

    The financial hit can be brutal. One Louisiana petrochemical facility lost about $8 million after a 2018 arc flash wrecked its main electrical room, vaporized high-value switchgear in milliseconds, and froze production for two months.

    It’s not just repairing parts. It’s outage penalties, lost throughput, workers’ comp exposure, and long lead times on replacement gear.

    Regulation Citations

    Regulators are direct about responsibility. OSHA’s 29 CFR 1910.335 requires employers to identify electrical hazards, assess the risk, and protect workers with appropriate practices and protective equipment. 

    This includes PPE and face/eye protection where there’s a risk from electric arcs or flashes. If there’s a known arc-flash hazard, it’s on the employer to address it.

    NFPA 70E turns that duty into a process. It outlines how employers must build and maintain an electrical safety program: identify and assess electrical risks, apply safety-related work practices, and justify energized work through a documented energized work permit when de-energizing isn’t feasible.

    Arc Flash Risks Increase as the Voltage Does

    Arc-flash risk shows up anywhere you’ve got energized gear above 50 volts: main switchboards, MCCs, disconnects, panelboards, temporary tie-ins, even that “just for now” junction someone installed during a turnaround and never pulled back out. 

    The only way to treat that risk seriously is to map it, quantify it, and coordinate the protective devices so that when something fails, it fails safely.

    Arc-Flash Studies: Your System’s Reality Check

    Think of an arc-flash study as a full electrical risk map of your facility. Instead of blood pressure and cholesterol, you’re measuring incident energy and blast radius. 

    • Engineers model every point where a worker might be exposed to an energized part.
    • They calculate how severe an arc event could be at each location.
    • They spell out what PPE and work practices are required.

    If you run equipment above 50 volts, you’re expected to have this documented. Insurers ask for current studies. Contractors expect to see them before they’ll quote energized work. Smart plant managers use them for planning outages, setting PPE rules, and controlling who’s allowed to open which covers. Without that map, you’re guessing every time someone cracks a door.

    A proper study doesn’t stop at the main switchgear. It starts at the utility service and follows power all the way down to the smallest breaker. 

    Engineers need to: 

    1. Build a digital model of the system.
    2. Calculate available fault current.
    3. Review breaker and relay settings.
    4. Determine incident energy for each piece of equipment.
    5. Generate arc-flash labels and safe-work boundaries.

    You walk away with updated one-lines, PPE tables, boundary distances, and task-specific procedures instead of guesswork.

    Most firms run these studies with dedicated tools such as SKM, ETAP, or EasyPower. Typical scope for a mid-size industrial facility runs a few weeks and costs in the tens of thousands of dollars. 

    Stack that against the multimillion-dollar downside of a single catastrophic event and the math is pretty simple.


    Core Calculations That You Can’t Wing

    Incident Energy

    “Incident energy” is how much thermal energy hits a worker standing at a defined distance from an arc. It’s measured in calories per square centimeter (cal/cm²) and it drives PPE. 

    • Around 1.2 cal/cm², bare skin will develop a second-degree burn. 
    • Around 8 cal/cm², normal work clothes can ignite. 
    • By 40 cal/cm², even a heavy arc suit might not be enough to walk away without severe injury.
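    Those thresholds are easy to encode as a sanity check. The sketch below simply maps a calculated incident energy to the severity bands listed above; it is illustrative only, and actual PPE selection has to come from the site study and NFPA 70E.

    def describe_incident_energy(cal_per_cm2: float) -> str:
        """Map a calculated incident energy to the severity bands described above.
        Illustrative only; PPE selection must come from the site study and NFPA 70E."""
        if cal_per_cm2 < 1.2:
            return "below the second-degree-burn threshold for bare skin"
        if cal_per_cm2 < 8:
            return "second-degree burn risk; arc-rated clothing required"
        if cal_per_cm2 < 40:
            return "ordinary work clothes can ignite; higher-rated arc gear required"
        return "40+ cal/cm2: even a heavy arc suit may not prevent severe injury"

    print(describe_incident_energy(6.5))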

    Arc-Flash Boundaries

    From those energy calculations, you get the arc-flash boundary — the distance at which a person without arc-rated PPE could expect a second-degree burn if an event occurs.

    Sometimes that line is within 18 inches of a small panel. Sometimes it’s twenty feet out from high-energy switchgear. People need to know exactly where that line is before they open anything, and supervisors need to enforce it.

    Fault Current

    Fault current analysis shows how much current can actually flow during a short. High available fault current can mean a massive, violent arc. It can also mean breakers and relays trip faster, which limits how long the arc lasts.

    Picture a 480-volt panel that can see 30,000 amps. If the protective device clears the fault in a couple of cycles, you’ve limited exposure. If it hesitates, you’ve just created a blowtorch.
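    Under IEEE 1584, incident energy scales roughly linearly with arcing time, which is why clearing speed matters as much as fault magnitude. The numbers in the sketch below are made up purely to show the proportion, not taken from any study.

    # Incident energy scales roughly linearly with arcing time (per IEEE 1584),
    # so clearing speed matters as much as fault magnitude. Numbers are illustrative.
    cycles_fast, cycles_slow = 2, 30      # hypothetical clearing times
    seconds_per_cycle = 1 / 60            # 60 Hz system

    t_fast = cycles_fast * seconds_per_cycle   # ~0.033 s
    t_slow = cycles_slow * seconds_per_cycle   # 0.5 s

    energy_at_fast = 3.0   # assumed cal/cm^2 if the device clears in 2 cycles
    energy_at_slow = energy_at_fast * (t_slow / t_fast)   # ~15x worse if it hesitates

    print(f"Fast clear: {t_fast:.3f} s -> ~{energy_at_fast:.1f} cal/cm^2")
    print(f"Slow clear: {t_slow:.3f} s -> ~{energy_at_slow:.1f} cal/cm^2")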

    Labels

    All of that information (voltage, incident energy, PPE category, arc-flash boundary) gets turned into a physical label posted on the gear. 

    Those labels are not decoration. They’re instructions. Workers lean on them when deciding: 

    • Do I need a face shield?
    • Full hood?
    • Insulated gloves?
    • Can I even have this cover open while energized?

    Labels have to stay readable in the real environment, and they have to stay accurate as settings and system configurations evolve. Regular label audits catch drift before it turns into a false sense of security during energized work.

    Coordination Basics: Clearing the Fault Without Killing the Plant

    Protective Device Coordination

    Protective devices have to act in the right order. When something faults, you want the closest upstream protective device to trip first, not the main breaker feeding half the facility. That’s selective coordination. A motor starter breaker should clear its own motor fault. The feeder breaker should clear a feeder fault. The main should only go if everything downstream fails.

    Engineers model this with coordination studies that simulate thousands of fault scenarios. They compare breaker and relay curves, account for transformer inrush, motor starting behavior, cable heating limits, and tolerance bands in the protective devices themselves. That modeling exposes weak points long before a live test does.

    Here’s where coordination ties back to arc-flash risk. Faster clearing times mean lower incident energy at the point of the fault. Great for the worker. But ultra-fast “hair trigger” settings can also cause nuisance trips on normal events like a motor inrush, which can take down production. 

    To solve that, facilities lean on tools like zone-selective interlocking or temporary maintenance modes: aggressive protection while someone’s inside the gear, normal coordination the rest of the time.

    Time-Current Curves

    Time-current curves plot how each breaker or relay behaves. The x-axis is current. The y-axis is time. Each curve shows, “At this much fault current, I’ll trip in this many cycles.” When you stack curves for devices in series, you can literally see whether they coordinate or overlap.

    Good coordination demands daylight between curves so that downstream protection gets first shot. That separation is often on the order of a few tenths of a second (roughly 0.2–0.4 seconds, or about 12–24 cycles at 60 Hz) at expected fault current levels. 

    If two curves overlap too tightly, you risk both devices tripping. If you separate them too much, you let the arc burn longer than it should. There’s judgment involved, especially in higher-voltage systems that need wider timing margins and in legacy gear that doesn’t behave like modern microprocessor relays.
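    To make the margin idea concrete, the sketch below compares two relays using the IEC 60255 standard-inverse characteristic, t = TMS * 0.14 / ((I/Is)^0.02 - 1). The pickups and time multipliers are placeholders; a real study works from manufacturer curves, tolerance bands, and tools like SKM or ETAP.

    def iec_standard_inverse(i_fault: float, pickup: float, tms: float) -> float:
        """IEC 60255 standard-inverse trip time (s) for a given fault current."""
        return tms * 0.14 / ((i_fault / pickup) ** 0.02 - 1)

    # Placeholder settings for a downstream feeder relay and the upstream main.
    fault_current = 8000.0   # A, expected fault level at the downstream bus
    downstream = iec_standard_inverse(fault_current, pickup=400.0, tms=0.10)
    upstream = iec_standard_inverse(fault_current, pickup=1200.0, tms=0.15)

    margin = upstream - downstream
    print(f"Downstream trips in {downstream:.2f} s, upstream in {upstream:.2f} s")
    print(f"Coordination margin: {margin:.2f} s (target roughly 0.2-0.4 s)")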

    That’s why coordination reviews still require an experienced power engineer, even when the software plot looks perfect.

    How the Study Actually Gets Done

    Step one is data. Engineers need transformer kVA, impedance, and winding configuration. They need cable lengths, sizes, and insulation types. They need breaker model numbers and the exact pickup and delay settings. Missing or wrong data wrecks accuracy and leads to false confidence.

    Accurate one-line diagrams matter just as much. If your drawings don’t show that “temporary” MCC you tied in last turnaround, or the rental generator someone quietly made permanent, the model will lie to you. Out-of-date single lines are one of the fastest ways to understate real hazards.

    Utility data can be painful to get but is critical. Available fault current, system X/R, and upstream relay behavior at the service entrance all drive calculated arc energy downstream. If you guess, you’re gambling with real people.

    Once the model is built, engineers calculate incident energy at each bus, evaluate clearing times, and assign working distances. This is where IEEE 1584 comes in. IEEE 1584 gives a lab-tested, math-driven method for predicting arc current, arc duration, incident energy, and arc-flash boundary across common industrial voltage classes (roughly 208 V through 15 kV) using empirical formulas developed from extensive testing.

    Working distance is a quiet killer. The math might assume 18 inches at 480 V and 36 inches at 4.16 kV. But if the gear is jammed in a narrow electrical room and you physically can’t stand that far back, real exposure is higher than the model says. Good engineers sanity-check the software with field reality and spot-check critical breakers manually, especially older units that may not clear as fast as their original curves imply.

    Boundaries and Labels: Drawing the Line Between Safe and Unsafe

    The study output isn’t just numbers in a binder. You get defined approach boundaries.

    • The arc-flash boundary is where an unprotected person could receive a second-degree burn if an arc occurred.
    • The limited approach boundary and restricted approach boundary deal with shock and accidental contact. Limited keeps unqualified people away from energized conductors (think 42 inches at 480 V, jumping to about 10 feet at 13.8 kV). Restricted is the “you’d better have written authorization and the right gear” zone where an accidental slip could put your body in contact with live parts. Those distances are not suggestions — they’re work control lines.

    These boundaries go on durable equipment labels along with voltage, incident energy, required PPE category, and any special notes. Labels must survive the environment and remain readable, and they have to match what’s actually in the gear today, not what was there five years ago.

    Routine label reviews catch fading, damage, or changes in calculated values that creep in after system modifications or breaker setting tweaks. People can’t protect themselves from hazards they can’t see, and they can’t follow rules they can’t see either.

    The Standards That Run the Show

    NFPA 70E

    NFPA 70E is essentially the playbook for electrical work practices in U.S. facilities. It expects you to build and maintain an electrical safety program, perform arc-flash and shock risk assessments, define energized work procedures and permits, train “qualified” workers, and audit that the program is actually followed in the field.

    OSHA

    OSHA enforcement sits behind that. OSHA 29 CFR 1910.335 says employers have to assess electrical hazards and provide protective equipment and tools suited to the task, including arc-rated PPE and face/eye protection where there’s a risk from electric arcs or flashes. In plain terms: if you expose employees to an energized hazard, you’re responsible for knowing the hazard and mitigating it.

    IEEE 1584

    IEEE 1584 moved arc-flash assessment from “best guess” to engineering discipline. It gives the calculation methods for arc current, arc duration (based on protective device clearing times), incident energy at a defined working distance, and the arc-flash boundary. Those results drive labeling, PPE, and procedural controls. Facilities whose studies haven’t been re-run since the older calculation methods often find that required PPE levels and safe distances change under the current model.

    Mitigation: Engineering, Admin, PPE

    Engineering Controls

    • Arc-resistant switchgear is built to contain and redirect arc blast energy through reinforced housings and vented plenums, so the blast goes up and away from the operator instead of out the door.
    • Current-limiting fuses slash peak fault current by opening extremely fast, which reduces how violent the arc can get. You give up reset capability — a fuse is done after it operates — but you often gain a dramatic cut in incident energy.
    • Maintenance mode / instantaneous trip: many modern breakers include a temporary “maintenance” setting that removes intentional delay and trips almost instantly while technicians are working, then goes back to normal coordination for production.
    • Remote racking and remote switching let workers stand clear of the arc-flash boundary altogether. Motor operators, wireless controls, and fixed cameras mean you don’t have to be face-to-face with live gear to open or rack it.

    Administrative Controls and PPE

    • Energized work permits slow people down and force the hard question: “Why can’t we de-energize?” The permit process often exposes safer alternatives to live work before anyone puts on a hood.
    • Arc-rated PPE selection starts with the incident energy on the label, but practicality matters. Can the person see, move, and breathe well enough to do the job without creating a new hazard? The highest category suit is not automatically the safest choice if it destroys dexterity.
    • Competency checks shouldn’t end in the classroom. Workers need to be observed actually donning the suit, staying outside restricted boundaries they’re not cleared to cross, and using insulated tools correctly.

    Condition-based maintenance helps too. Infrared scans, partial discharge testing, and targeted inspections reduce how often you have to open energized gear in the first place. Every avoided “hot” exposure is risk reduction.

    Bringing It All Together

    Arc-flash studies and protective device coordination aren’t academic paperwork exercises. They’re the backbone of an electrical safety program that protects people, limits outage blast radius, and keeps production from falling apart after one bad breaker event. They also create a common language between operations, maintenance, safety, and insurance: everyone can point to the same label and see the same risk.

    Treat the study as a living document, not a one-and-done binder. Update it when you add a big motor, swap a transformer, change utility service, or tweak breaker settings. Focus upgrades where energy is highest and human contact is most frequent — that 40 cal/cm² main switchboard deserves priority long before a 2 cal/cm² lighting panel.

    Finally, choose engineers who understand both math and plant reality. The cheapest study isn’t a bargain if it ignores how your people actually work, or if it hands you labels without explaining maintenance mode, switching procedures, and PPE strategy. You don’t need paperwork. You need a plan that people can execute tomorrow without getting hurt.


    Commissioning KPIs: Forecasting Startup Dates & Avoiding Rework

    When downtime burns thousands of dollars an hour, nobody can afford a messy handoff to operations. In some process plants it can spike into the hundreds of thousands.

    Commissioning is the stage where a project either turns into a running asset or stalls in costly limbo. Miss a startup date and you get lost production, emergency fixes, and tense calls with leadership. A focused set of commissioning KPIs turns that chaos into something you can manage. 

    At PLC Construction, we’ve watched disciplined KPI programs cut commissioning delays by about a third and drive rework almost to zero. The benefit doesn’t end at startup. Teams that learn to measure the right things once tend to repeat that behavior — and the wins — on every project after.

    What Commissioning KPIs Really Do

    Track Progress and Quality

    Commissioning KPIs are measurable signals that show how close you are to a safe, reliable startup. They track progress, quality, and readiness in real time instead of relying on “we’re close.” 

    Used correctly, they drive decisions and early corrective action instead of after-the-fact explanations.

    There’s a financial reason this matters. Rework on large industrial and energy projects routinely eats 5–15% of total project cost and is a major driver of schedule slippage.

    Some studies tie rework to schedule overruns approaching 10% of planned duration. Cut that, and you protect both budget and schedule.

    Keeping Safety at the Forefront

    There’s also safety. KPI-driven commissioning forces you to prove out safety-critical systems, like shutdown logic, interlocks, and alarms before startup, under controlled conditions. Fewer surprises means fewer incidents.

    And once leadership can see objective status each week, credibility goes up. Clear, data-backed readiness is a lot easier to fund and defend than “we think we’re days away.”

    Why Startup Forecasting Matters


    Accurate startup dates drive production planning, staffing, logistics, and sales commitments. In oil and gas, timing can move market exposure by millions.

    A one-day slip can cost from tens of thousands to hundreds of thousands in deferred throughput and standby costs. Those losses compound fast when delays trigger contract penalties or keep contractors and rentals on site longer than planned.

    Missed dates also jam the rest of your portfolio. Commissioning talent and specialty tooling are shared. 

    When Project A slides, Projects B and C instantly lose critical people and equipment. After a couple of blown “go-live” calls, executives stop believing any forecast. Once that trust is gone, the project team spends more time defending timelines than executing.

    Why Forecasting Is Hard

    Commissioning isn’t just “the last bit of construction.” It’s live systems, real product, and tight regulatory and safety constraints.

    Common blockers show up fast:

    • Incomplete turnover. Teams walk in and find missing documentation, open punch items, or systems that weren’t really finished. That work now sits on the critical path.
    • Iterative testing. Commissioning is: test → find issue → fix → retest. Those loops eat days, and most schedules underestimate them.
    • External dependencies. Vendor reps, regulatory witnesses, and weather windows can all stall progress. Classic Gantt charts usually pretend those factors are controllable. They aren’t.

    When forecasting ignores these realities, production hires early, sales promises volumes, and operations stages feedstock for a startup that doesn’t happen. The scramble that follows burns morale and money.


    The KPIs That Matter

    You do not need 40 charts. Strong commissioning KPI frameworks stay focused on three buckets: schedule, quality, and resources.

    Schedule / Timeline KPIs

    Schedule Performance Index (SPI) for commissioning phases

    SPI compares earned progress to planned progress. Unlike simple “percent complete,” it asks whether the work is actually accepted to standard. An SPI drifting below 1.0 is an early warning that you’re sliding, even if everyone is still saying “we’re fine.”

    Milestone hit rate

    This tracks what share of defined commissioning milestones land on or before their target date. Commissioning work is tightly linked — when a prerequisite slips, five downstream tasks stall. A falling hit rate tells you where the bottleneck is building.

    Critical path stability

    This KPI watches the handful of activities now dictating the forecast startup date. If the critical path keeps jumping between systems, risk is high. Stable critical paths are easier to staff and defend.
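    The first two schedule KPIs are simple ratios once the data is clean. The sketch below shows the calculations; the earned/planned values, milestone names, and dates are all made up for illustration.

    from datetime import date

    # Schedule Performance Index: earned value / planned value (placeholder numbers).
    earned_value = 740_000.0     # value of commissioning work accepted to date
    planned_value = 820_000.0    # value planned to be complete by today
    spi = earned_value / planned_value
    print(f"SPI = {spi:.2f}  (below 1.0 means the phase is slipping)")

    # Milestone hit rate: share of milestones finished on or before their target date.
    milestones = [
        ("loop checks complete",       date(2025, 3, 1),  date(2025, 2, 27)),
        ("control narrative verified", date(2025, 3, 10), date(2025, 3, 14)),
        ("SIS proof tests signed off", date(2025, 3, 20), None),   # not done yet
    ]
    hit = sum(1 for _, target, actual in milestones if actual and actual <= target)
    print(f"Milestone hit rate = {hit}/{len(milestones)} = {hit / len(milestones):.0%}")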

    Quality / Rework Prevention KPIs

    First-pass acceptance

    How many loops, subsystems, or functional tests pass the first time with no fix needed? Weak first-pass rates flag systemic issues (incomplete procedures, calibration problems, bad vendor packages) and predict schedule drag.

    Defect density

    Track punch items or defects per unit of work turned over and trend it. A spike in one system tells you exactly where to focus supervision, vendor support, or QA.

    Rework hours vs. total hours

    Rework time is pure drag. Across capital projects, direct and indirect rework commonly sits in the 5–15% cost range and drives measurable schedule overruns. Watching this ratio live tells you whether you’re burning time fixing yesterday’s work instead of moving toward startup.

    Investing in quality control during construction has been shown to reduce rework by as much as roughly 40%.

    Resource / Efficiency KPIs

    Resource allocation accuracy

    Compare the people and tooling you planned for this week with who actually showed up and what they actually worked on. Gaps here explain schedule misses before they show up in SPI.

    Cross-discipline coordination

    Measure how often mechanical, electrical, controls, and operations sign off on a system without dispute. Most ugly delays come from last-minute “wait, that’s not what we agreed.” This KPI makes that visible.

    Vendor / contractor readiness

    Track vendor response time, documentation quality, and ability to support field testing when called. If a supplier can’t show up or can’t produce final packages, your startup date is at risk whether you like it or not.

    Cutting Rework Before It Hits the Schedule

    Rework is the silent killer. On heavy industrial work, it regularly lands between 5% and 10% of total cost and, in bad cases, reaches double digits. 

    It also drags timelines; some analyses tie rework to nearly 10% extensions in planned schedules. Every hour spent tearing something back apart is an hour not moving toward startup.

    The best defense is readiness.

    Before you call Day One of commissioning:

    • Mechanical completion needs to be real, not optimistic. Loops closed, punch list under control, redlines captured.
    • Documentation must match reality — P&IDs, loop sheets, cause-and-effect matrices, procedures. Missing or outdated docs are one of the fastest paths to rework.
    • Roles and sign-off authority must be clear. If nobody knows who can accept a system, you stall.

    During execution, smart teams build playbooks for known pain points.

    They ask, “Where did we lose days last time?” and stage spares, vendor techs, and test gear before those issues resurface.

    They also schedule formal quality checkpoints — structured holds where leads confirm work meets standard before anything gets energized. Catching a wiring error before power-up is cheap. Finding it after a breaker trips is not.

    Building a KPI Framework People Will Actually Use

    Rolling out KPIs is half tooling, half culture. If the dashboard feels like punishment or extra admin work, it will die fast.

    Keep dashboards visual and blunt. Use color to flag trouble areas, trends to show whether you’re improving, and exception reporting to highlight the handful of items that truly threaten startup.

    Automate data capture wherever possible. Pull status from project controls, test forms, and turnover databases so field crews aren’t double-entering numbers. KPIs that depend on manual reporting tend to vanish when people get busy.

    And keep the list short. High-performing groups usually monitor a focused set of core indicators tied directly to business objectives, not dozens of vanity metrics.

    Research on KPI governance shows that simple, aligned, transparent KPIs drive better behavior than bloated scorecards. More KPIs does not equal more control — just more noise.

    Keeping KPIs Useful

    KPI reviews need a drumbeat. Hold recurring sessions with commissioning leads, construction, operations, and safety. The goal isn’t to admire charts; it’s to agree on fixes for this week.

    As data builds, you can tighten startup forecasts. Historical first-pass rates, typical retest loops, and average defect density all feed into more realistic duration estimates. That’s how you move from “we hope” to “we’re ready on this date.”

    When a KPI trends the wrong way — SPI falling, rework hours climbing — treat it like any other deviation: find root cause, assign corrective action, verify the fix.

    Getting Started

    You don’t need enterprise software to begin. Pick three to five KPIs tied to the pain that hurts you most right now — maybe SPI, first-pass acceptance, and rework hours. Establish a clean baseline. Track them daily or weekly. Talk about them in the open.

    Pilot on one unit or system instead of trying to “KPI the whole plant” on day one. That pilot becomes proof of value and a training ground for everyone else.

    Most important: KPIs are not for blaming people. They’re for protecting the startup date, protecting safety, and protecting margin. When field crews see that, adoption stops being a fight.

    Commissioning done right isn’t luck. Measure what matters, act on it, and you’ll forecast startup dates with confidence while starving rework before it ever lands on the critical path.


    SCADA Architecture for Multi-Site Saltwater Disposal (SWD) Operations

    Produced-water volumes keep climbing, regulations keep tightening, and running each SWD site as a standalone island creates blind spots you can feel in uptime, trucking logistics, and compliance. 

    A well-designed SCADA (Supervisory Control and Data Acquisition) system becomes the nerve center: one view, many sites, consistent decisions. It centralizes monitoring (injection rates, pressures, equipment health) and compliance evidence while allowing local control to ride through communication hiccups. 

    Security and reliability patterns for these systems are well-documented in NIST SP 800-82 (Rev. 3) and the ISA/IEC 62443 family. Use them as the backbone for design choices.

    Cross-Site Scheduling for Smarter Operations

    Centralization pays off when wells hit capacity unevenly, county rules don’t match, and trucks queue at the wrong gate. With site-to-site visibility and cross-site scheduling, you can smooth loads, redirect trucks, and tighten reporting. This is especially useful with the strict injection limits and integrity testing that regulators emphasize.

    The EPA’s UIC Class II program frames the big picture. Day-to-day limits and forms are set by state agencies such as the Texas RRC, Oklahoma OCC, and New Mexico OCD.

    Field Realities You Have to Design Around

    Distance and Networks

    Remote SWD sites rarely get high-speed fiber optic network connectivity. You’ll juggle cellular, licensed/unlicensed radio, and satellite, each with their own latency and availability trade-offs.

    Architect for graceful degradation: keep essential control local and push summaries to the center. 

    Regulations Vary

    Texas commonly conditions injection permits on a maximum surface injection pressure of about 0.5 psi per foot of depth to the top of the injection interval, a practical ceiling intended to avoid fracturing confining zones. Oklahoma and New Mexico impose different pressure/testing particulars and reporting cadences. Every new field brings different regulations, and that can make centralized accounting and regulatory reporting seem impossible.
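    As a simple illustration of that condition, the sketch below computes the permit-style ceiling from the 0.5 psi/ft gradient and checks a measured surface pressure against it. The depth, measured pressure, and alarm margin are placeholders, and the number on the actual permit always governs.

    # Illustrative check against the ~0.5 psi/ft surface-pressure condition described above.
    # Depth and measured pressure are placeholders; the permit value always governs.
    PSI_PER_FOOT = 0.5
    top_of_injection_interval_ft = 7200
    measured_surface_pressure_psi = 3450

    permit_ceiling_psi = PSI_PER_FOOT * top_of_injection_interval_ft   # 3600 psi here
    headroom_psi = permit_ceiling_psi - measured_surface_pressure_psi

    print(f"Permit ceiling ~{permit_ceiling_psi:.0f} psi, headroom {headroom_psi:.0f} psi")
    if headroom_psi < 100:
        print("ALARM: approaching maximum surface injection pressure")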

    A centralized SCADA environment can bridge those varying requirements across a large corporate structure. Whether that means templating reports per state so operators aren’t word-smithing spreadsheets at midnight, or post-processing field data for consolidated accounting, a SCADA system gives the business the agility to compete.

    Capacity Balancing

    Without system-wide visibility, one site hits its daily limit while another idles. Central dispatch guided by historian trends and real-time KPIs (injection efficiency, uptime, alarms) curbs wasted trucking miles and improves compliance headroom. 

    Route water traffic based on available capacity and operational needs. Centralizing the data supports decisions based on the real-time state of the system as a whole, not one site in a vacuum.

    Safety and Environmental Signals

    You’re watching formation pressures, permitted rates, water quality, and leak/spill indicators continuously. Staying within limits isn’t optional; it’s the line between steady operations and citations.


    What to Monitor and Why

    Pressures and rates. They define safe operating envelopes and permit compliance. Deviations trigger operator notifications.

    Water quality. Salinity, oil/solids carryover, and treatment efficacy influence disposal formation compatibility and maintenance cycles.

    Equipment health. Use vibration/temperature/runtime to drive condition-based maintenance so a failing pump doesn’t become a shutdown.

    Data harmonization. Different pads run on mixed protocols (EtherNet/IP, Modbus, legacy RTUs), so standardizing tags/units is critical. DNP3 suits unreliable links with event reporting, while OPC UA offers secure, interoperable data modeling for modern systems.
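    A minimal sketch of that harmonization step is below: map each site-specific tag to a standard name and convert to common units before anything lands in the historian. The tag names and the single conversion shown are made-up examples.

    # Illustrative tag/unit normalization so every site reports the same way.
    # The site tag names and units below are made-up examples.
    TAG_MAP = {
        "SWD01:INJ_PRES_PSI": ("injection_pressure", "psi"),
        "SWD07:WHP_KPA":      ("injection_pressure", "kPa"),
    }

    def normalize(tag: str, value: float) -> tuple[str, float]:
        """Map a site-specific tag to a standard name and convert to psi."""
        std_name, unit = TAG_MAP[tag]
        if unit == "kPa":
            value *= 0.145038          # kPa -> psi
        return std_name, value

    print(normalize("SWD07:WHP_KPA", 23800.0))   # -> ('injection_pressure', ~3452 psi)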

    Cybersecurity isn’t optional. Treat all SCADA systems as critical infrastructure: zone/segment by function, route traffic through conduits, apply least privilege, and instrument for detection.

    Core Architecture for Multi-Site SWD

    Central Servers, Thoughtfully Redundant

    Keep primary and standby in separate locations; use clustering for historians/alarms so a single failure doesn’t blank your visibility. This mirrors OT guidance for resilience rather than fragile, single-box “do-everything” servers.

    Operator Interfaces that Scale

    Start with a map-level overview for status at a glance, click into facility screens for that site’s specific equipment and control, and standardize navigation so an operator can cover all of their facilities without relearning screen logic at each site.

    Rugged Field Controllers

    PLCs/RTUs must survive heat, cold, dust, and vibration. Outdoor enclosures rated NEMA 4X protect against hose-down, wind-blown dust, and corrosion.

    Hazardous areas typically call for Class I, Division 2-appropriate equipment selection and installation.

    Protocol Mix

    A SCADA environment lets you run a mix of widely available protocols. It’s easy to keep Modbus for simple reads/writes, use DNP3 where spotty links benefit from event buffers and time-stamps, and use OPC UA where you want secure information modeling across vendors.

    For sensor nets and edge telemetry, MQTT offers a lightweight publish/subscribe pattern suited to constrained, intermittent links.

    Selecting Hardware That Actually Lasts

    Environmental Protection

    Moisture, dust, and salt attack electronics. Match enclosures to the environment; NEMA 4X is common outdoors and in washdown or corrosive atmospheres. In classified areas, ensure the whole bill of materials (enclosure, fittings, devices) meets Class I, Div 2 rules.

    Power Resilience

    Power failures happen. Size UPS ride-through correctly, and pair it with automatic transfer switches and generators following NFPA 110 guidance (design, testing, maintenance). Even when not legally required, adopting NFPA 110 conventions hardens recovery from grid events.

    Modularity

    Buy controllers with I/O headroom and communication-module expansion so you don’t have to build a whole new panel for a few new wells or added storage capacity.

    Software Platform Requirements

    • Each site’s control should be logically isolated even if infrastructure is shared. Role-based access ensures pumpers see controls, managers see summaries, and contractors see only what they need. OPC UA and modern SCADA platforms support certificate-based trust and user authorization patterns that align with this.
    • Push immediate alarms and safety logic to the edge so local automation carries the load when backhaul drops, a posture reinforced in OT security guidance.
    • Secure web/HMI views let supervisors acknowledge alarms, and techs fetch manuals and trends in the field—without poking holes around segmentation boundaries.

    Multi-Site Network Design

    Topology and links. Stars (each site back to HQ) are simple; meshes offer alternate paths over radio for resilience. Mix cellular primary with licensed radio failover; keep satellite as last-resort.

    Automatic failover. Let the comms layer switch paths without operator action. Prioritize alarm transport ahead of bulk history when bandwidth shrinks.

    Historian “store-and-forward.” Local buffers hold time-series data during outages and trickle it back when the link returns. Most modern historians and MQTT pipelines support this pattern out of the box; it’s a good antidote to compliance gaps from missing samples.

    Cloud vs. hybrid. Cloud deployment adds elasticity for analytics and storage, but pure cloud control adds risk. A hybrid model keeps critical functions on-prem while leveraging cloud to scale. That split is consistent with OT security references.

    Bandwidth hygiene. Use compression, report-by-exception, deadbands, and DNP3 event reporting so you’re not paying to move noise.
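    The store-and-forward and report-by-exception patterns above are straightforward to sketch. In the example below, send_to_central() is a stand-in for whatever transport the site actually uses (DNP3 events, an MQTT publish), and the deadband values are placeholders.

    from collections import deque
    import time

    buffer: deque = deque(maxlen=100_000)   # local store-and-forward queue
    last_sent: dict[str, float] = {}
    DEADBAND = {"injection_pressure": 5.0, "tank_level": 0.25}   # engineering units

    def send_to_central(record: tuple) -> bool:
        """Stand-in for the real transport (DNP3 event, MQTT publish, etc.).
        Return False when the backhaul is down so the record stays buffered."""
        return True   # placeholder

    def report(tag: str, value: float) -> None:
        """Report-by-exception: only queue values that move past the deadband."""
        if tag in last_sent and abs(value - last_sent[tag]) < DEADBAND.get(tag, 0.0):
            return                                  # inside the deadband: don't pay to move noise
        last_sent[tag] = value
        buffer.append((time.time(), tag, value))    # timestamp locally for the historian
        while buffer and send_to_central(buffer[0]):
            buffer.popleft()                        # drain the backlog when the link is up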

    Picking The Right Protocols

    • Modbus: ubiquitous, simple, minimal overhead; limited security features.
    • DNP3: event buffers, confirmations, secure authentication, time-sync; strong choice for unreliable links and compliance-friendly audit trails.
    • OPC UA: vendor-neutral information modeling with certificates for authentication, integrity, confidentiality; ideal for northbound IT/analytics.
    • MQTT: ultra-light pub/sub model that thrives on constrained links (battery sensors, remote skids), widely used across IoT and Oil & Gas applications.

    Compliance Integration (Make Audits Boring)

    Make Reporting Automatic

    Generate required forms directly from historian tags and events, templated per state (Texas RRC, OCC, OCD), with time-stamps and signatures handled electronically. 

    You’re aligning operations with the UIC Class II program while meeting local paperwork rules.

    Environmental Monitoring

    Fold groundwater, air, and spill detection into SCADA so alarms and trends live in the same pane of glass as injection metrics.

    Performance & Analytics

    Dashboards that matter. Surface injection efficiency, capacity headroom, equipment utilization, and energy burn. Use historian trends to justify capital or redistribute load.

    Predictive maintenance. Vibration and temperature patterns flag developing failures. Runtime counters move you from time-based to condition-based PMs; less wrench time, fewer surprises.

    Scheduling optimization. Blend reservoir response trends with trucking ETAs to maximize throughput without flirting with permit limits.

    Historical insight. Seasonal swings, gradual equipment decay, and energy cost patterns turn into targeted fixes and sensible budgets.

    What Good Looks Like in Practice

    • Operators get consistent screens across all sites and can triage alarms without hopping tools.
    • Maintenance sees condition trends and recommended actions, not cryptic tag floods.
    • Management tracks compliance posture, capacity headroom, and costs on one page.
    • Regulators receive clean, time-stamped reports aligned to their template—no manual re-entry.

    If you’re starting from scratch, build a thin slice first: two sites, standardized tags, historian with store-and-forward, segmented networks, and a minimal KPI dashboard. Then replicate.


    From P&IDs to Panels: Specifying Control Panels and Passing FAT/SAT

    If you’ve ever watched a “simple” panel job turn into three weeks of scramble, you know the truth. The way we translate P&IDs into real, physical control panels makes or breaks commissioning. 

    Get the specification right and FAT/SAT feel like a formality. Miss a few details and you buy delays, field rework, and warranty heartburn.

    Here’s a practical, standards-anchored playbook so your panels ship right, install cleanly, and start up on schedule, from reading the P&IDs to closing out SAT.

    Understanding P&IDs and What They Don’t Tell You

    P&IDs are the backbone: they capture process flow, instruments, control loops, and protection functions you’ll marshal into a panel. 

    Use recognized symbol and identification standards so the whole team speaks the same language:

    • ISA-5.1 (Instrumentation Symbols & Identification).
    • ISO 14617-6 (graphical symbols for measurement/control).
    • PIP PIC001 practice for P&ID content and format.

    Read P&IDs methodically and extract a structured panel spec:

    • I/O & signals: per-loop counts and types (AI/AO/DI/DO), ranges, isolation, power class, and any intrinsically safe barriers.
    • Safety integrity: which functions are SIS/SIF vs. BPCS, and the SIL target that will drive architecture and proof testing under IEC 61511 / ISA-84.
    • Communications: what must speak to what. EtherNet/IP, Modbus, OPC UA, and which links are safety-related vs. information only.
    • Environment & location: enclosure rating, temperature/humidity, corrosion exposure, and whether the panel or field devices sit in a hazardous (classified) location (e.g., Class I, Division 2 under NEC/NFPA 70/OSHA).
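    One practical way to keep that extraction structured is to give every I/O point the same shape from day one. The sketch below is illustrative only; the field names and example tags are assumptions, not a standard.

    from dataclasses import dataclass

    @dataclass
    class IOPoint:
        """One row of the I/O list extracted from the P&IDs (illustrative fields)."""
        tag: str             # e.g. "PT-1203", tagged per ISA-5.1
        io_type: str         # AI / AO / DI / DO
        range_eu: str        # calibrated range and engineering units
        is_sif: bool         # part of a SIS/SIF (drives IEC 61511 handling)
        hazardous_area: str  # e.g. "Class I, Div 2" or "unclassified"
        network: str         # EtherNet/IP, Modbus, hardwired, ...

    points = [
        IOPoint("PT-1203", "AI", "0-1500 psig", False, "Class I, Div 2", "hardwired"),
        IOPoint("XV-1210", "DO", "open/close", True, "Class I, Div 2", "hardwired"),
    ]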

    Reality check: P&IDs rarely spell out alarm philosophy, historian tags, user roles, or cybersecurity boundaries; yet all of these affect the panel. 

    Close those gaps early using your site alarm standard (ISA-18.2 if you have it) and your OT security baseline (IEC 62443 / NIST SP 800-82).

    Specifying the Control Panel: Removing the Mystery

    1) Electrical and Safety Fundamentals

    • Applicable codes/standards: Design to UL 508A for industrial control panels (construction, component selection, SCCR, spacing/labels) and to NFPA 70 (NEC) for installation and hazardous-area rules. If you intend to ship a UL-labeled panel, say so explicitly in the spec.
    • Power architecture: feeder details, UPS/ride-through targets, heat load and cooling method, and fault/coordination assumptions that drive breaker and SCCR selections.
    • Arc-flash/LOTO hooks: provide nameplate data and working-clearance assumptions so the safety documentation and labels align with NEC/plant practice.

    2) Environmental and Enclosure Choices

    • Specify enclosure type rating and materials (e.g., 3R/4/4X) against salt/fog, washdown, or desert heat; define heater/AC setpoints and condensate routing. In hazardous locations, align construction with Class I, Division 2 expectations (equipment suitability, wiring methods, sealing).

    3) Networking and Cybersecurity by Design

    • Call out segmented networks (controls vs. corporate), managed switches, time sync, and remote-access methods. Reference IEC 62443 and NIST SP 800-82 so vendors document zones/conduits, authentication, and logging up front instead of bolting them on later.

    4) HMI and Operator Experience

    • Define HMI size/brightness, glove/touch needs, language packs, and alarm colors/priorities to match your alarm philosophy. Good HMI rules save hours in SAT by avoiding “Where is that valve?” moments. Tie displays to tag names and cause-and-effect tables derived from the narrative.

    5) Documentation That is Actually Testable

    • Require: instrument index and I/O list, loop sheets, electrical schematics, network drawings, panel layout, bill of materials with certifications, software functional specification / control narrative, alarm rationalization tables, and FAT/SAT procedures. Quality documentation is the contract for acceptance.

    Functional Safety: Bake It In, Don’t Patch It Later

    If the panel carries any part of a SIS, treat those functions per IEC 61511 from day one:

    • Safety Requirements Specification (SRS).
    • Independence/separation from BPCS as required, diagnostics, bypass/override design, and proof-test intervals and methods captured in the test plan. 
    • Mapping P&ID cause-and-effect to SIFs early prevents last-minute rewires and retests.

    Looking for an EPC Company that does it all from start to finish, with in house experts?

    FAT: Make the Factory Your First Commissioning

    Why FAT matters: It’s cheaper to find mismatched wiring, wrong scaling, bad alarms, or flaky comms at the vendor’s bench than at your site. IEC 62381:2024 lays out the structure and checklists for FAT, FIT, SAT, and SIT. Use that backbone to avoid “interpretation debates.”

    Plan before you build:

    • Approve test procedures and acceptance criteria up front (I/O by I/O; sequences for start/stop/upset; comms failover; load/latency checks).
    • Define roles: who witnesses, who signs, who logs deviations/non-conformances.
    • Arrange the tooling: signal simulators, calibration gear, comms analyzers, and, for complex plants, a process simulator or emulation. (If you can’t simulate it, you can’t prove it.)

    Execute methodically:

    • I/O and loop checks: polarity, ranges, scaling, engineering units, clamps/limits, bumpless transfer, and fail-safe states (a scaling sketch follows this list).
    • Comms & integration: protocol verification (addressing, byte order, time-stamps), performance under load, and third-party skid integration.
    • Alarm tests: priorities and annunciation per your philosophy; standing-alarm rules; shelving/suppression behavior.
    • SIS proof points: for SIFs, demonstrate detection, logic, final element action, and trip times against SRS targets. Record what you prove and how often you must re-prove it.
    • Document everything: Log NCRs, corrective actions, and the as-tested configuration (firmware, IPs, logic versions). This package becomes the seed for SAT.
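
    For the scaling and range checks above, a small helper keeps the arithmetic honest during loop checks. The transmitter range and injected signal below are hypothetical examples, not values from any project.

        # Loop-check helper sketch: convert a simulated 4-20 mA signal to engineering units
        # and compare against the loop sheet. Range values below are hypothetical examples.

        def scale_current(ma: float, eu_lo: float, eu_hi: float,
                          ma_lo: float = 4.0, ma_hi: float = 20.0) -> float:
            """Linear scaling from loop current to engineering units, clamped to range."""
            ma = min(max(ma, ma_lo), ma_hi)   # clamp out-of-range input
            return eu_lo + (ma - ma_lo) * (eu_hi - eu_lo) / (ma_hi - ma_lo)

        # Example: a 0-300 psig transmitter injected at 12 mA should read mid-range.
        expected = 150.0
        measured = scale_current(12.0, 0.0, 300.0)
        assert abs(measured - expected) < 0.5, "scaling/range mismatch against loop sheet"
        print(f"12 mA -> {measured:.1f} psig (expected {expected} psig)")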

    SAT: Prove It in the Real World, Safely

    Between FAT and SAT, configuration drift happens (a device swap, a quick code fix). Lock versions, track MOC, and re-run targeted FAT steps if something changes.

    Prereqs worth confirming:

    • Power quality, grounding/bonding, and panel clearances match design; hazardous-area equipment and wiring meet NEC/OSHA expectations.
    • Network services (time sync, DHCP reservations, routes) actually exist on site, not just on the vendor’s bench.
    • Instruments are installed, calibrated, and ranged per the loop sheets.

    Run SAT in a deliberate order:

    1. Dry tests first (no live product): I/O point-to-point, permissives/interlocks proved with simulated signals.
    2. Cold commissioning: energize subsystems, check sequences without process risk.
    3. Live tests: exercise start/stop/abnormal scenarios with the process, record timings and loads, then compare to FAT baselines.
    4. Performance snapshots: capture response times, loop performance, and comms throughput as operating references for maintenance.
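
    One way to make step 4 actionable is to diff SAT measurements against the FAT baselines automatically. The sketch below is illustrative, with hypothetical tags, timings, and a tolerance you would set per your own acceptance criteria.

        # Sketch only: compare SAT timing measurements against FAT baselines and flag
        # anything that drifted beyond a tolerance. Tags and values are hypothetical.

        FAT_BASELINE_MS = {"ESD_valve_close": 1800, "comms_failover": 950, "pump_start_seq": 4200}
        SAT_MEASURED_MS = {"ESD_valve_close": 1950, "comms_failover": 2100, "pump_start_seq": 4300}

        def compare_to_baseline(baseline: dict, measured: dict, tolerance: float = 0.15):
            """Return items whose SAT timing exceeds the FAT baseline by more than the tolerance."""
            flagged = {}
            for tag, base in baseline.items():
                sat = measured.get(tag)
                if sat is None or sat > base * (1 + tolerance):
                    flagged[tag] = (base, sat)
            return flagged

        for tag, (base, sat) in compare_to_baseline(FAT_BASELINE_MS, SAT_MEASURED_MS).items():
            print(f"Investigate {tag}: FAT {base} ms vs SAT {sat} ms")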

    Close out with an operational turnover: as-builts, calibration certs, final programs/config backups, cause-and-effect tables, alarm philosophy, training records, and the signed FAT/SAT dossier.

    Common Trip-Wires and How to Step Around Them

    • Protocol quirks: Modbus register maps, byte order, and undocumented vendor “extensions” cause many delays. Specify and test protocol details during FAT; bring a sniffer (see the decoding sketch after this list).
    • Legacy surprises: Old PLCs/SCADA with limited connections or slow polling collapse under new loads. Identify limits early and throttle or upgrade.
    • Spec drift: small field changes stack into big test gaps. Control with formal change management tied to document versions.
    • Environment vs. build: panels that pass in a lab can fail in heat, dust, or salt. Size HVAC, coatings, and gasketing for reality, not brochures.
    • Hazardous area assumptions: labeling or wiring that doesn’t meet Class I, Div 2 or local code will halt SAT. Verify before shipment.
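
    To see why byte and word order bite, the sketch below decodes the same hypothetical pair of Modbus holding registers two ways; only one of them matches the vendor’s register map, and FAT is where you find out which.

        import struct

        # Sketch only: the same pair of Modbus holding registers decoded with and without
        # word swapping. Register values are hypothetical; verify against the vendor map at FAT.

        def regs_to_float(reg_hi: int, reg_lo: int, word_swapped: bool = False) -> float:
            """Combine two 16-bit registers into an IEEE-754 float (big-endian bytes)."""
            if word_swapped:
                reg_hi, reg_lo = reg_lo, reg_hi
            return struct.unpack(">f", struct.pack(">HH", reg_hi, reg_lo))[0]

        regs = (0x42C8, 0x0000)   # hypothetical register pair
        print("straight order :", regs_to_float(*regs))          # 100.0
        print("word swapped   :", regs_to_float(*regs, True))    # a wildly different number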

    A Minimal, High-Leverage Panel Spec

    • Standards: UL 508A build and label; NEC/NFPA 70 installation/hazardous location compliance.
    • Safety: IEC 61511 lifecycle for any SIF; SRS attached; proof-test intervals defined.
    • Docs: I/O index; loop sheets; schematics; panel GA; network drawings; bill of materials with certifications; control narrative; alarm philosophy; IEC 62381-aligned FAT/SAT plan.
    • Environment: enclosure rating (NEMA 4/4X/12), thermal design, corrosion/condensation mitigation; hazardous classification notes and wiring method.
    • Cyber: IEC 62443/NIST 800-82 references; zones/conduits; remote access/MFA; logging.

    Why This Works

    You’re aligning the design and test process with widely recognized guidance:

    • ISA-5.1 / ISO 14617 for drawings and symbols.
    • IEC 61511 / ISA-84 for safety.
    • IEC 62381 for FAT/SAT choreography.
    • UL 508A and NEC for how the panel is built and installed.
    • IEC 62443 / NIST 800-82 for security.

    That common language shortens meetings, sharpens acceptance criteria, and reduces surprises.

    Takeaways You Can Apply

    • Pick one pilot system and write the control narrative and FAT together; you’ll catch 80% of ambiguities before metal is bent.
    • Publish a one-page protocol sheet (addresses, registers, time sync, failover) to every vendor before FAT.
    • Add a site-readiness checklist to the SAT plan (power quality, grounding, network services, hazardous location verification).
    • Require a config snapshot (firmware/logic versions, IP plan) at FAT exit and at SAT entry—then diff them.
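
    The snapshot diff does not need fancy tooling. The sketch below compares two hypothetical snapshot dictionaries (yours might come from JSON exports of firmware, logic revisions, and the IP plan) and flags anything that changed between FAT exit and SAT entry.

        # Minimal sketch: diff a FAT-exit snapshot against a SAT-entry snapshot.
        # Keys and values are hypothetical placeholders.

        fat_exit  = {"PLC1_firmware": "32.011", "HMI_runtime": "14.0", "logic_rev": "R17", "PLC1_ip": "10.10.1.10"}
        sat_entry = {"PLC1_firmware": "32.013", "HMI_runtime": "14.0", "logic_rev": "R18", "PLC1_ip": "10.10.1.10"}

        def diff_snapshots(before: dict, after: dict) -> dict:
            """Return {key: (before, after)} for anything added, removed, or changed."""
            keys = set(before) | set(after)
            return {k: (before.get(k), after.get(k)) for k in keys if before.get(k) != after.get(k)}

        for key, (b, a) in sorted(diff_snapshots(fat_exit, sat_entry).items()):
            print(f"{key}: {b} -> {a}  (confirm MOC and re-test coverage)")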

    Dan Eaves, PE, CSE

    Dan has been a registered Professional Engineer (PE) since 2016 and holds a Certified SCADA Engineer (CSE) credential. He joined PLC Construction & Engineering (PLC) in 2015 and has led the development and management of PLC’s Engineering Services Division. With over 15 years of hands-on experience in automation and control systems — including a decade focused on upstream and mid-stream oil & gas operations — Dan brings deep technical expertise and a results-driven mindset to every project.

    PLC Construction & Engineering (PLC) is a nationally recognized EPC company and contractor providing comprehensive, end-to-end project solutions. The company’s core services include Project Engineering & Design, SCADA, Automation & Control, Commissioning, Relief Systems and Flare Studies, Field Services, Construction, and Fabrication. PLC’s integrated approach allows clients to move seamlessly from concept to completion with in-house experts managing every phase of the process. By combining engineering precision, field expertise, and construction excellence, PLC delivers efficient, high-quality results that meet the complex demands of modern industrial and energy projects.

    FEED vs. Detailed Design: How to De-Risk Your Gas Plant Build

    The Stakes and Why Front-End Choices Matter

    Gas plants are capital-intensive, multi-discipline beasts. Miss on scope or sequence and costs explode, schedules slip, and confidence fades. 

    In large capital programs broadly, reputable studies show chronic budget and schedule slippage, with the vast majority of megaprojects running over. That is exactly why the front end has outsized leverage on outcomes.

    This guide clarifies when to use FEED (Front End Engineering Design) versus jumping straight to detailed design, and how that choice affects risk, cost accuracy, procurement timing, and delivery.

    FEED vs. Detailed Design: What’s the Real Difference?

    What FEED Actually Does

    Think of FEED as the bridge from concept to buildable intent. It locks the process basis and key design criteria, producing PFDs/P&IDs, plot plans, preliminary equipment specs, and the safety backbone (for example, HAZOP and SIS/SIL planning per the relevant IEC standards).

    The payoffs are tighter estimates and fewer surprises. In the AACE estimate-class framework, moving from conceptual (Class 5/4) toward Class 3 typically improves accuracy to roughly −10%/−20% to +10%/+30% depending on complexity, far better than the ±50% conceptual range often cited for early studies.

    On cost, FEED commonly falls in the ~2–3% of total installed cost (TIC) range (some programs cite ~3–5% depending on depth and complexity), but that spend underwrites sharper scope, procurement strategy, and construction planning.
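
    To put those ranges in perspective, here is back-of-envelope arithmetic on a hypothetical $100M TIC; the percentages simply restate the bands above and are no substitute for a properly classed estimate.

        # Back-of-envelope sketch using the ranges discussed above on a hypothetical $100M TIC.
        # Percentages are illustrative bounds, not a substitute for an AACE-classed estimate.

        TIC = 100_000_000  # hypothetical total installed cost, USD

        def estimate_band(tic: float, low_pct: float, high_pct: float) -> tuple:
            return tic * (1 + low_pct), tic * (1 + high_pct)

        class5 = estimate_band(TIC, -0.50, +0.50)    # conceptual-range swing (about +/-50%)
        class3 = estimate_band(TIC, -0.20, +0.30)    # FEED-supported, Class 3-style band
        feed_spend = (0.02 * TIC, 0.03 * TIC)        # ~2-3% of TIC

        print(f"Conceptual band : ${class5[0]:,.0f} - ${class5[1]:,.0f}")
        print(f"Class 3 band    : ${class3[0]:,.0f} - ${class3[1]:,.0f}")
        print(f"FEED spend      : ${feed_spend[0]:,.0f} - ${feed_spend[1]:,.0f}")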

    Safety and operability analysis belong here. Use IEC 61882 to structure HAZOPs and IEC 61511 to frame SIS lifecycle/SIL determination.

    What Detailed Design Delivers

    Detailed design transforms FEED intent into construction-ready drawings, isometrics, data sheets, cable schedules, control narratives, and procurement packages across mechanical, electrical, I&E, and civil/structural disciplines. At this point, estimate precision typically tightens again (toward Class 2/1 bands), with narrower ranges suitable for bids and final investment decisions.

    The distinction in plain language: FEED defines the right plant; detailed design defines every bolt of the right plant.

    Why Front-End Rigor Pays (Risk, Cost, Schedule)

    1) Cost Accuracy You Can Defend

    FEED narrows uncertainty from conceptual swings (±50%) toward Class-3-like ranges (often in the ~±15–30% envelope for process industries), enabling credible budgets, contracting strategy, and financing.

    Independent and vendor literature report that robust FEED/front-end loading correlates with lower total installed cost and shorter execution. Some benchmarks cite material cost reductions and schedule improvements when FEED is thorough.

    2) Fewer Technical Surprises

    FEED is where you validate process simulations, unit operations, and operability/maintainability. Run the HAZOP, assign SIL targets to SIFs, and stress-test tie-ins.

    Doing it here prevents costlier changes later and anchors mandatory safety/protection requirements for detailed design.

    3) Schedule You Can Actually Hit

    A complete FEED lets you start procurement in parallel with detailed design, which is critical when long-lead packages (e.g., large compressors, major electrical gear) run 20–60 weeks or more, with some compression equipment out a year or longer in tight markets. Early identification and pre-bid/vendor engagement protect the critical path.
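
    Working lead times backward is simple arithmetic worth automating; the sketch below uses hypothetical dates, lead times, and a buffer to show the “place PO by” logic.

        from datetime import date, timedelta

        # Sketch only: when must a long-lead PO be placed to protect the critical path?
        # Dates, lead times, and buffer are hypothetical placeholders.

        NEED_ON_SITE = date(2026, 9, 1)        # hypothetical mechanical-completion-driven date

        LONG_LEAD_WEEKS = {                    # hypothetical quoted lead times
            "residue compressor package": 60,
            "main switchgear lineup": 40,
            "large control valves": 24,
        }

        BUFFER_WEEKS = 6                       # receiving, inspection, and installation float

        for item, weeks in LONG_LEAD_WEEKS.items():
            order_by = NEED_ON_SITE - timedelta(weeks=weeks + BUFFER_WEEKS)
            print(f"{item:<28} lead {weeks:>2} wk  ->  place PO by {order_by.isoformat()}")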

    For LNG and large gas processing, FEED itself can take 12–18 months, but that time produces a package that de-risks EPC and informs long-lead buys.

    Gas-Plant Reality Check: The Risk Landscape

    • Technical/Process: complex separations, acid gas removal, sulfur recovery, and high-consequence safety envelopes—best handled with standards-driven HAZOP/SIS governance.
    • Commercial: inflation, supply-chain volatility, and scarce specialty resources. Sector-wide analyses still see 30–45% average budget/schedule variance on major programs without better controls/visibility.
    • Regulatory/ESG: tightening emissions and permitting expectations add steps you want planned, not discovered.
    • Operational: 20–30-year life cycles demand flexibility for feed changes and future debottlenecking.

    How FEED Protects Your Project

    Cost discipline early. Use FEED to standardize equipment, simplify process trains, and remove bespoke one-offs. Lock an AACE-aligned basis of estimate and contingency logic; socialize it with financiers and partners to avoid late-stage resets.

    Safety first, on paper. Complete HAZOP and define SIS lifecycle/SIL targets before detailed design. Treat the outputs as design requirements, not advice.

    Procurement strategy early. Identify long-lead items during FEED; pre-qualify vendors and launch RFPs on the first safe opportunity. Many MEP/electrical packages (switchgear, AHUs, large valves) now see 20–60-week windows; large compression skids may extend 12+ months.

    Parallelize smartly. With process requirements frozen and key specs set, detailed design can progress while long-lead orders and early works start—shortening your critical path.

    When to Move from FEED to Detailed Design

    Green lights typically require:

    • Technical maturity: simulations closed, FEED-level P&IDs/plot plan, HAZOP actions addressed, preliminary 3D/constructability passes done.
    • Commercial readiness: budget approved, funding plan in place, contracting model set, long-lead procurement strategy defined.
    • Permitting/ESG: material approvals on track to avoid EPC stalls.
    • Risk posture: if you must accelerate, quantify what’s “at risk” (and cap it). Sector analyses warn that under-cooked front ends are a common root cause of cost and schedule overruns.

    Managing Gas-Plant Risk

    1. Own a living risk register from FEED onward, covering technical/commercial/schedule/regulatory line items with owners and triggers (a minimal structure is sketched after this list).
    2. Favor proven tech unless the business case justifies pilot/prototype risk; if you must push tech, secure vendor guarantees and performance bonds.
    3. Plan compliance in FEED—early agency engagement, environmental baseline work, and submissions sequenced to your long-lead timeline.
    4. Build real contingency: technical alternates, schedule recovery options, and cost mitigation actions you can actually execute.
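
    A risk register only stays “living” if it is easy to query. The sketch below keeps hypothetical entries as data with owners, triggers, and status, as mentioned in item 1 above.

        # Minimal sketch of a living risk register kept as data, with owners and triggers.
        # Entries below are hypothetical placeholders.

        RISK_REGISTER = [
            {"id": "T-01", "category": "technical",  "risk": "amine unit turndown below design",
             "owner": "process lead", "trigger": "simulation update", "status": "open"},
            {"id": "C-03", "category": "commercial", "risk": "compressor quote validity expires",
             "owner": "procurement",  "trigger": "quote expiry date", "status": "open"},
            {"id": "R-02", "category": "regulatory", "risk": "air permit comment period slips",
             "owner": "HSE lead",     "trigger": "agency response date", "status": "mitigating"},
        ]

        def open_risks(register, category=None):
            """Filter to risks that still need an owner decision, optionally by category."""
            return [r for r in register
                    if r["status"] != "closed" and (category is None or r["category"] == category)]

        for r in open_risks(RISK_REGISTER, "commercial"):
            print(f"{r['id']}: {r['risk']} (owner: {r['owner']}, trigger: {r['trigger']})")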

    Making the Choice

    • Use thorough FEED for high-complexity, first-of-a-kind, brownfield tie-ins, constrained sites, or tight safety envelopes.
    • Consider acceleration only when market timing justifies it and you can quantify the added risk (and carry the contingency).
    • Don’t skip FEED to “save” 2–3%—front-end investment routinely saves multiples downstream via fewer changes, cleaner procurement, and faster commissioning.
    • Match to capability: experienced owner’s teams may compress phases; others should buy rigor with experienced FEED partners.

    A Simple Decision Checklist

    • Scope clarity: Process basis frozen? Battery limits clear?
    • Safety: HAZOP complete (key actions closed), SIS/SIL targets set per IEC 61511?
    • Estimate maturity: AACE-aligned class with documented assumptions/contingency?
    • Procurement: Long-lead list finalized; RFPs/tenders staged; vendor shortlist agreed?
    • Schedule logic: FEED→detailed design overlap defined; early works identified; critical path driven by long-lead reality (not hope)?
    • Permits/ESG: filings sequenced to avoid EPC stalls?
    • Change control: frozen-line philosophy and governance in place?
    1. Adopt a narrative template mapped to ISA-106 states plus ISA-18.2 alarm hooks; pilot it on one complex unit.
    2. Publish an alarm philosophy one-pager (priorities, KPIs, standing-alarm rules) and socialize it at the console.
    3. Stand up a role-based training index tied to your OQ program (API RP 1161/PHMSA FAQs) so every trainee knows the modules to complete before CSU.