Low-cost fixed sensors are an emerging option to aid in the management and reduction of methane emissions at upstream oil and gas sites. They have been touted as a cost-effective continuous monitoring technology to detect, localize, and quantify fugitive emissions. However, to support emissions management, the efficacy of low-cost fixed sensors must be assessed in the context of the sites, technologies, methods, work practices, action thresholds, and outcomes that constitute a broader program to manage and reduce emissions. Here, we build on technology-focused research and testing by defining a prototypical low-cost fixed sensor program framework and considering the deployment from an operational perspective. We outline potentially large operational cost penalties and risks to industry relative to incumbent programs. Most costs are caused by (i) follow-up callouts, (ii) nontarget emissions, and (iii) maintenance requirements. These represent core areas for improvement. Results highlight a need for careful consideration in regulations, ensuring that alert protocols are carefully codified and system performance is maintained.
Introduction
Reducing methane emissions from the upstream oil and gas (O&G) industry is one of the easiest and most effective actions that can be taken to mitigate near-term global temperature rise (Nature Editors, 2021). Because methane is the principal component of natural gas, a significant portion of emissions from the O&G supply chain can be addressed with little net cost and proven solutions exist for most common issues (Environment and Climate Change Canada [ECCC], 2018; U.S. Environmental Protection Agency [U.S. EPA], 2022). The short atmospheric lifetime of methane relative to carbon dioxide means that immediate emissions reductions have near-term subdecadal effects. These facts have led regulators globally to accelerate efforts to reduce emissions (ECCC, 2018; Colorado Department of Public Health and Environment [CDPHE], 2019; The Pennsylvania Department of Environmental Protection [PDEP], 2019; Alberta Energy Regulator [AER], 2020; European Union [EU], 2020; New Mexico Environment Department [NMED], 2020; U.S. EPA, 2022) and spurred the formation of international climate agreements and commitments targeting methane emissions reductions from the O&G sector.
The challenge of reducing methane emissions from the upstream O&G industry is multifaceted. There are hundreds of thousands of upstream sites that have very little production infrastructure but carry significant potential to emit. These sites are geographically dispersed across vast production basins, adding considerable mobilization complexities. Generally, the problem can be split into issues that have known locations and known abatement solutions (e.g., pneumatics, intentional vents), and issues that have unknown locations and unknown emissions rates, referred to here as “fugitive emissions” (e.g., leaks, abnormal or unexpected vents). Finding and fixing fugitive emissions fast has been a major focus of scientific interest and technology development and is the main goal of a process known as leak detection and repair (LDAR).
Efforts to find fugitive emissions are often met with industry cost concerns (Patel, 2017; American Petroleum Institute [API], 2023). These concerns, coupled with the need to better understand emissions (United Nations Environment Programme, 2020; Wang et al., 2022), have led to rapid development of new technology to detect, measure, and locate emissions (Fox et al., 2019a). This technology is nascent and, in many cases, unproven at scale for reducing emissions, yet shows some promise (Ravikumar et al., 2019; Schwietzke et al., 2019; Sherwin et al., 2021; Erland et al., 2022; Sherwin et al., 2022; Bell et al., 2023).
In many jurisdictions, fugitive emissions reductions are normally achieved through prescribed actions (e.g., periodic optical gas imaging [OGI] surveys; e.g., AER, 2020). These actions are assumed to reduce emissions, and operators are generally considered compliant if they implement them as prescribed. To encourage innovation in LDAR, regulators are becoming increasingly flexible and trending toward allowing customized programs involving a range of technologies and actions (U.S. EPA, 2022). For example, in Alberta, Canada, the provincial regulator has allowed customized pilot programs since 2021 (AER, 2020). These pilot programs are approved if they meet modeled methane emissions reduction thresholds equivalent to standard periodic OGI surveys (Fox et al., 2019b; AER, 2020). Consequently, operators are strongly incentivized to design an LDAR program that achieves the prescribed emissions reductions at the lowest cost possible.
In this study, we take an operational perspective on deployment of these new LDAR programs in order to complement existing work focusing on technology performance (e.g., Siebenaler et al., 2016; Ravikumar et al., 2019; Sherwin et al., 2021; Sherwin et al., 2022; Bell et al., 2023). Despite rapid technological advancements in LDAR, the technology must ultimately be implemented by O&G companies in order to reduce emissions, and this implementation must be practical to be effective. O&G operations are generally not open to public inspection as O&G sites are hazardous, secured environments. Furthermore, most North American regulators require O&G companies to implement their own LDAR, rather than performing LDAR themselves. This implementation stage is poorly studied yet remains critical to real-world emissions reductions. This study bridges the gap between technology, implementation, and policy.
Cost-effectiveness is one of the most important considerations for operators implementing an LDAR program (API, 2023). The cost of equipment for emissions detection, measurement, and localization is not the only cost that matters within a program. Staffing, data management and interpretation, maintenance, leak repair, and other business costs are all borne by operators in LDAR operations. Notably, inaccurate data can lead to extra costs associated with work that does not contribute to emissions reductions.
Cost-effectiveness focused primarily on the up-front cost of equipment has led to considerable discussion of a particular class of new LDAR technology known as “low-cost fixed sensors” (Siebenaler et al., 2016; Patel, 2017; Riddick et al., 2020; Riddick et al., 2022). These are small, tripod- or pole-mounted emissions measurement systems similar to weather stations that are typically mounted around O&G sites at the fence line (Figure 1). Because these systems are relatively inexpensive and developed using mass-produced, consumer-grade commodity components, they have potential to enable cost-effective continuous monitoring at scale (e.g., hundreds to thousands of O&G sites). This capacity to scale quickly is a critical advantage over higher cost scientific sensors. However, these inexpensive sensors lack the quality and accuracy of scientific-grade equipment (Peltier, 2021) and have been the subject of nongovernmental organization concern (Stockman et al., 2023). Despite this, interest in low-cost fixed sensors has been spurred by the Methane Detectors Challenge (Siebenaler et al., 2016) and notable regulatory signals in the proposed methane rules (Government of Canada, 2022; U.S. EPA, 2022). This interest is warranted, but an analysis of LDAR program effectiveness is required within the full context of operationalization.
Recent work from Bell et al. (2023) provides well-validated performance data for industry leading low-cost fixed sensors. Here, we discuss how the general performance characteristics measured by Bell et al. (2023) affect a prototypical LDAR program (Fox et al., 2019b). We consider operationalization of these sensors, and how regulations interplay with practical deployment. Our goal is to develop a more complete understanding of the operational perspective and costs associated with programs built around low-cost fixed sensors.
Prototypical low-cost fixed sensor program description and background
We begin by defining a precise prototypical LDAR program. For clarity, an LDAR program defines the sites, methods, work practices, action thresholds, and outcomes thereof to reduce emissions (Fox et al., 2019b). Ultimately, it is the whole program that results in emissions reductions and not any one given technology or method.
LDAR programs are normally regulated in terms of required actions, not required outcomes (ECCC, 2018; CDPHE, 2019; PDEP, 2019; AER, 2020; EU, 2020; NMED, 2020; U.S. EPA, 2022). Sawyer et al. (2022) contrast different regulatory approaches in North America. To predict the emissions reductions from actions in LDAR programs, a suite of modeling tools has been developed (see Fox et al., 2021a; Kemp and Ravikumar, 2021). These tools have been used by both industry proponents of custom LDAR programs and regulators to predict LDAR emissions reductions. For example, in Alberta, Canada (AER, 2020), custom, industry-designed LDAR programs have been operationalized since 2021. The U.S. EPA (2022) is proposing something similar, but with predesigned options for operators to consider.
To comply with LDAR regulations, operators must follow the prescribed (or premodeled) actions (Fox et al., 2021a; Kemp and Ravikumar, 2021). The emissions reductions are assumed to occur from the actions. Operators are incentivized to comply with the regulations but do not necessarily have incentive to go beyond the regulations. This is partially due to the inability to monetize extra emissions reductions but is also due to the O&G industry having a deep focus on cost reduction, similar to other businesses (API, 2023). Furthermore, it is difficult to directly measure the effect of additional LDAR work on emissions reductions, which makes an internal business case for extra LDAR work hard to justify.
Our program has 2 emissions measurement methods and work practices. First is our prototypical low-cost fixed sensor, which provides emissions event detections, localizations, and quantifications at the equipment group scale in the form of alerts (Bell et al., 2023). These alerts trigger follow-up with OGI to identify the exact component that is emitting (Zimmerle et al., 2020). Finally, images and data from the OGI follow-up can trigger abatement (Figure 2). We do not explicitly consider the emissions reductions of the program. We assume that it is designed and implemented to achieve regulatory compliance following the most common North American LDAR regulatory model (Fox et al., 2019b; Fox et al., 2021a; Fox et al., 2021b; Kemp and Ravikumar, 2021).
Our base for comparison is periodic OGI surveys executed 4 times per year (AER, 2020; U.S. EPA, 2022). OGI surveys are a widely discussed and common method for LDAR. Zimmerle et al. (2020) and Ravikumar et al. (2018) closely examine the performance of OGI cameras to detect leaks. Ravikumar et al. (2020) discuss the real-world performance of periodic OGI-based LDAR programs with careful consideration of LDAR efficiency. The precise actions for OGI surveys are well-elaborated in regulations (e.g., AER, 2020).
Sites
We model our prototype sites after upstream O&G sites that are common across North America. We describe the sites because the characteristics of the sites influence LDAR operations. Our sites typically contain one or several wells that flow into simple separators. In the case of gas production, the separators remove liquid-phase hydrocarbons or water, which are stored in an onsite tank and periodically trucked out. In the case of oil production, the separators remove the gas-phase hydrocarbons, which are often vented or flared (combusted) on site. The oil is collected in a large onsite tank and is also periodically trucked out. Our prototypical sites do not have mains electricity—onsite produced gas is used to control the production processes. These sites are very common across North America and present a good case study for emissions management; however, there are other upstream O&G sites that differ from those described here.
Our upstream sites have several known classes of emissions. We describe the classes of emissions as they can influence the effectiveness of LDAR:
Leaks: unintended emissions that occur in locations that are not designed to emit. These occur due to inevitable degradation of production equipment over time, or due to repair or assembly mistakes.
Vents: intended emissions that exist by design on the site and are therefore, in most cases, understood. For example, it is normal to use gas-driven pneumatic controllers on upstream sites as the gas pressure provides an easy power source to actuate valves. Wellhead casing gas is also often vented. In cases where liquids production is the target, gas is often considered waste and is vented if it cannot be combusted or tied into a pipeline.
Nonroutine venting: emissions where a known vent emits more or less than the engineered rate. We treat this as a separate class of emissions event as it has a known source location, but a rate that is different than the known rate.
Combustion process emissions: emissions associated with incomplete combustion of gas on upstream sites. Gas is often combusted to (i) dispose of the gas while lowering the climate impact by converting methane to carbon dioxide (e.g., flaring or incineration) or (ii) provide power or heat for some part of production, such as compression, pumping, or another use.
Maintenance emissions: short-lived emissions events related to regular processes (e.g., liquid transfers) or maintenance activities. Emissions often occur when any equipment is being depressurized or when managed changes in process conditions occur.
Within the context of our LDAR program, we are targeting leaks and higher than design nonroutine venting. The other emissions sources on the sites are considered nontarget.
Low-cost fixed sensors and work practice
The prototypical low-cost fixed sensor used here mirrors commercially available options (Bell et al., 2023; Figure 1). Emissions sources emit methane that is advected by the wind to the sensor, which measures the methane concentration anomaly and coeval wind speed and direction. The raw data are periodically processed to map the location of emissions sources, quantify emissions rates, and eventually trigger alerts for follow-up.
Low-cost fixed sensors are commonly provided by vendors in packages that include hardware and algorithms, with some including maintenance programs (Bell et al., 2023). Similar to other LDAR technology assessments (Ravikumar et al., 2019), it is only possible to assess the performance of the full hardware and software package as the internal details are proprietary (Bell et al., 2023). From an operational perspective, an operator would choose from one of the commercially available vendors and pay for both sensors and associated software.
It is helpful to describe the basic characteristics of these sensing systems to contextualize our discussion and better understand how the different components of the system interact. Each sensing system consists of 5 basic components:
Methane concentration: a low-cost metal-oxide total hydrocarbon sensor is used to measure the methane concentration. This sensor may be coupled with a small fan to improve temporal response. These sensors have several well-understood issues and require careful data conditioning and regular maintenance to be useful (Peltier, 2021; Furuta et al., 2022). Riddick et al. (2020, 2022) provide a detailed performance analysis of commonly used sensors. Our low-cost fixed sensor uses a sensor similar to a Figaro TGS2611-E00 (see Riddick et al., 2022).
Wind: a wind speed and direction sensor, which is used to locate the emissions source and provide quantification algorithms with wind data.
Onsite computer infrastructure: a small, single-board Internet-enabled computer system with supporting sensors and modules to record measured data and communicate with offsite computer infrastructure.
Power: a power source and mounting hardware.
Offsite computer infrastructure: an offsite computer infrastructure to process raw measurements, converting them to emissions detections, localizations, and quantifications. Comparing these processed data against predefined thresholds produces alerts.
In our prototypical program, the deployment is designed by the low-cost fixed sensor vendor, such that they specify the number of fixed sensors required to meet a certain performance specification (similar to Bell et al., 2023). As the fixed sensor only provides emissions data when downwind of equipment, it is usually necessary to deploy at least 4 systems around the site. However, fixed sensor placement is quite complex to optimize (Klise et al., 2020), and in practice, low-cost fixed sensors provide only intermittent coverage, sensing only those parts of a site that are directly upwind of a sensor with no intervening structures that could block airflow (see examples of poor deployments in Stockman et al., 2023). We assume that a low-cost fixed sensor provider would analyze and model a site, such that placement could maximize coverage of the equipment on site, but note that there are no known standards available to guide this placement (Klise et al., 2020).
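The intermittency of downwind coverage can be made concrete with a toy calculation. The sketch below is illustrative only; the function name, the 30-degree plume half-width, and the single-point equipment assumption are ours rather than any vendor's. Given a series of meteorological wind directions, it estimates the fraction of time a fenceline sensor at a given bearing from the equipment is downwind.

```python
def downwind_fraction(wind_from_deg, sensor_bearing_deg, half_width_deg=30.0):
    """Fraction of wind samples for which a fenceline sensor is downwind of a
    single equipment group. Wind directions use the meteorological convention
    (the direction the wind blows FROM), so a plume travels toward
    (w + 180) degrees; the sensor is downwind when that travel direction lies
    within half_width_deg of the sensor's bearing from the source.
    """
    def ang_diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    hits = sum(
        1 for w in wind_from_deg
        if ang_diff((w + 180.0) % 360.0, sensor_bearing_deg) <= half_width_deg
    )
    return hits / len(wind_from_deg)

# Under a uniform wind rose sampled every 5 degrees, a single sensor with a
# 30-degree half-width is downwind only about 18% of the time, which is why
# several sensors and long averaging windows are needed for site coverage.
uniform_winds = list(range(0, 360, 5))
coverage = downwind_fraction(uniform_winds, sensor_bearing_deg=180.0)
```

A realistic wind rose is rarely uniform, so in practice some sensors on a site may be downwind far less often than this toy figure suggests.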
Our prototypical low-cost sensors do not function without field support from a local technician. First, during installation and in any changes to the equipment on site, the locations of all possible emissions sources (equipment) are surveyed and uploaded to the offsite computer infrastructure. Second, downtime associated with broken sensors must be resolved reasonably quickly to maintain compliance—local technicians are required to repair or replace sensors. Third, all sensors across the fleet require periodic maintenance and calibration. This ranges from evaluating sensor drift on a regular basis by comparing against a reference instrument, to replacing batteries, clearing snow and dust off solar panels, and cleaning the intake filters. Maintenance is critical to maintain system performance specification (Riddick et al., 2020; Riddick et al., 2022; Bell et al., 2023).
Once collected, data are transmitted off-site and run through a suite of algorithms (similar to Riddick et al., 2022):
Quality control and error detection: raw data must be automatically checked for errors that could indicate sensor problems. Any issues are flagged for human inspection and may trigger a service callout to field technicians.
Calibration and anomaly detection: once initial quality control is passed, the data are calibrated. It is well-known that low-cost metal-oxide sensors require careful and sensor-unique calibration to produce useful data (Peltier, 2021; Riddick et al., 2020; Furuta et al., 2022; Riddick et al., 2022). The raw data time series are then analyzed for anomalous concentrations, which may represent plumes of methane from equipment.
Source localization and detection: the data are then fed into an updating model of source locations that accumulates information on the probability that any given location on the site is emitting. Areas upwind of detected plumes are nudged upward in probability of being a source (and vice versa). These localization algorithms have been widely discussed and generally function (e.g., Abdelghaffar et al., 2017). Localization performance is significantly improved when masked with known equipment locations determined at install.
Source emissions quantification: once a potential known source is localized with high probability, emissions quantification can be performed. There is a diverse range of quantification algorithms available, but this prototypical sensor uses a variation of EPA OTM 33A, a well-understood point-source Gaussian model that provides screening-grade emissions quantifications (Thoma and Squier, 2014; Riddick et al., 2022).
Alerting: emissions detections, localizations, and quantifications are then considered against the predefined alert thresholds to issue text message or email alerts to operators. Thresholds are predefined as part of the program to maintain a desired performance specification.
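The quantification step above can be illustrated with a minimal point-source Gaussian calculation in the spirit of EPA OTM 33A. This is a sketch under stated assumptions, not the official method: OTM 33A derives its dispersion parameters from measured wind statistics, whereas here the horizontal and vertical dispersion values (sigma_y, sigma_z) are supplied directly as assumed inputs, and the function name and default conditions are ours.

```python
import math

def gaussian_point_source_rate(c_peak_ppm, wind_speed_m_s, sigma_y_m, sigma_z_m,
                               temp_k=288.15, pressure_pa=101325.0):
    """Estimate a methane emission rate (kg/h) from a peak downwind
    concentration enhancement, using the point-source Gaussian relation
    Q = 2*pi * u * sigma_y * sigma_z * C_peak (screening grade only).
    """
    R = 8.314          # J/(mol*K), ideal gas constant
    M_CH4 = 16.04e-3   # kg/mol, molar mass of methane
    air_mol_m3 = pressure_pa / (R * temp_k)             # mol of air per m^3
    c_kg_m3 = c_peak_ppm * 1e-6 * air_mol_m3 * M_CH4    # ppm -> kg/m^3
    q_kg_s = 2.0 * math.pi * wind_speed_m_s * sigma_y_m * sigma_z_m * c_kg_m3
    return q_kg_s * 3600.0

# A 1 ppm enhancement at 3 m/s with assumed 5 m x 2.5 m dispersion works out
# to roughly 0.6 kg/h, i.e., a screening-grade estimate of a modest source.
rate = gaussian_point_source_rate(1.0, 3.0, 5.0, 2.5)
```

This simplified form omits the plume-averaging and correction steps of the full OTM 33A procedure; it is included only to show how concentration, wind, and dispersion combine into a rate estimate.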
Typical alerting performance of commercially viable low-cost fixed sensors (similar to the system described here) was comprehensively field tested by Bell et al. (2023). Functional performance of tested sensors had considerable variability. For example, Solution E detected 87.7% of releases but simultaneously had a false positive rate of 79.1%. Conversely, Solution K only detected 0.3% of all releases and was largely ineffective at issuing alerts in the presence of released emissions. Although poor detection performance could be ascribed to the release rates chosen in Bell et al. (2023), the false positive rates across all sensors were high: (i) Solution A: 52.5%, (ii) Solution C: 9.3%, (iii) Solution D: 3.2%, (iv) Solution E: 79.1%, and (v) Solution F: 15.1%. Refer to Bell et al. (2023) for full context and additional details.
Interpretation and OGI follow-up
Alerts from the low-cost fixed sensor system are passed over to site operators for follow-up investigation. How the operator deals with these alerts is a major component of fixed-sensor programs that requires considerable discussion and is likely the most difficult component of the entire program to regulate and ensure reliable and trackable performance.
For our prototypical program, all alerts immediately trigger an OGI inspection to ensure an emissions reduction outcome. However, outside of our program, there are several possible scenarios: (i) ignore: the operator could simply ignore the alert, (ii) triage: the operator could issue immediate instructions for follow-up only when an alert is significantly out of the ordinary, (iii) investigate: the operator could conduct a simple investigation without specialized equipment, or (iv) investigate with OGI: upon failure to identify any emissions through simple investigation, the operator could conduct a more rigorous investigation with an OGI camera, which can isolate the component that is leaking.
OGI cameras have a practical detection limit of 3.29 (2.6–7.7) standard liters per minute of methane when used by experienced OGI camera operators (Zimmerle et al., 2020). OGI surveys will thus miss the smallest emissions sources. We are not certain if low-cost fixed sensors would issue alerts for emissions sources that are below the detection limit of OGI (Bell et al., 2023). OGI cameras also have a number of well-known environmental limitations, such as precipitation or temperatures below −20°C (Zimmerle et al., 2020).
In a case where operators are granted flexibility to ignore alerts, the decision that an operator makes can be both clouded and informed by knowledge of the facility. Operators have deep knowledge of the processes at their facilities and are best positioned to interpret emissions events. For example, vents that have temporal variation in their emissions rate (yet on average are emitting design rates) could be justifiably ignored; however, it takes a human with knowledge of the site and previous emissions to make that interpretation. Localization skill of the alert can meaningfully affect decisions. Localizing the emissions source to a certain piece of equipment can assist in 2 ways: (i) interpretation could be much easier and (ii) the follow-up could be targeted, reducing survey time and cost (but not mobilization cost).
OGI surveys are costly—but in our prototypical program, there is no path to abatement without component-scale identification and confirmation of the exact issue. Abatement requires detailed information, and the low-cost fixed sensor alone cannot provide sufficient information.
Abatement
Abatement in our program begins when a leak tag is created by the OGI camera crew. From this, the repair is queued, and the emissions source is eliminated or reduced (in the case of nonroutine venting). As an LDAR program, abatement within our program is limited to leaks and nonroutine venting.
However, low-cost fixed sensors can provide data that might help operators manage emissions from nontarget sources. The provision of general emissions data to help nontarget abatement is frequently cited in the discussion of low-cost fixed sensors (Riddick et al., 2022; Bell et al., 2023) but can be difficult to value directly. Emissions data, whether from low-cost fixed sensors or other systems, can help operators plan retrofits and replacement of venting components. Data can be used to help internally justify or prioritize these capital expenses. Additionally, these emissions data can help improve operational procedures on sites. Examples include changes to protocols for tank loading to minimize emissions or adding checks for closed thief hatches to operator checklists. Changes to operational procedures can have very material effects on some of the largest emissions events, such as pipeline blowdowns.
Program cost
Framework
To better understand program costs, we developed a cost comparison framework for the entire program. The framework cannot provide precise cost estimates; our goal is to unpack how costs occur and identify how they manifest. We have included a spreadsheet in the Supplemental Material that provides a sandbox for program cost modeling (see Supplemental Material Table S1 and Supplemental Material Text S1). Three broad categories of costs exist in low-cost fixed sensor programs: equipment, service and support, and follow-up inspections. These costs are controlled by the size of the program (number of sites), maintenance requirements, and the frequency of alarms triggering follow-up OGI inspections. Our basis for comparison is a periodic visit (quarterly) OGI survey program. The OGI survey program costs (COGI) are:
$$C_{\mathrm{OGI}} = n \, n_s \, n_{\mathrm{OGI}} \, \bar{c}_{\mathrm{OGI}} \tag{1}$$

where $n$ is the duration of the program in years, $n_s$ is the number of sites in the program, $n_{\mathrm{OGI}}$ is the number of OGI inspections per year, and $\bar{c}_{\mathrm{OGI}}$ is the average OGI inspection cost per site. The OGI inspection cost includes the vehicles, staff, OGI camera, and all business-related expenses associated with the service of OGI inspections.
The low-cost fixed sensor program costs (CLCFS) are comprised of a mix of low-cost fixed sensor costs and follow-up OGI:
$$C_{\mathrm{LCFS}} = n \, n_s \, (\bar{c}_{\mathrm{LCFS}} + \bar{n}_{\mathrm{FU}} \, \bar{c}_{\mathrm{OGI}}) \tag{2}$$

where $\bar{c}_{\mathrm{LCFS}}$ is the average cost per year for the network of fixed sensors at a site and $\bar{n}_{\mathrm{FU}}$ is the average annual number of follow-ups required as a result of low-cost fixed sensor alerts.
The cost per year of the fixed sensor infrastructure is composed of many different subcosts. For initial capital, the purchase of the fixed sensors is the main cost. The initial capital cost can be internalized in many ways, ranging from leasing to outright purchase. The low-cost fixed sensors also carry ongoing operational costs, such as maintenance, support, periodic replacement, offsite data processing, and business-related costs for the low-cost fixed sensor company, which are packaged into a monthly fee. For simplicity, we consider the low-cost fixed sensor costs as a yearly cost that rolls up both capital and ongoing operational costs, although the costs could be modeled as an upfront purchase by some operators.
The number of fixed sensors per site is part of fixed sensor infrastructure costs and is prespecified as part of a program design. In general, the more sensors deployed at a site, the higher the data quality. It is up to the fixed sensor vendor to design the deployment to meet a modeled performance specification; the number of sensors on a site cannot reasonably be a free parameter without modeling performance (Klise et al., 2020; Bell et al., 2023).
Cost equality between the base OGI program and low-cost fixed sensor program occurs in a situation where Equations 1 and 2 are combined:

$$n \, n_s \, n_{\mathrm{OGI}} \, \bar{c}_{\mathrm{OGI}} = n \, n_s \, (\bar{c}_{\mathrm{LCFS}} + \bar{n}_{\mathrm{FU}} \, \bar{c}_{\mathrm{OGI}}) \tag{3}$$

and simplified to

$$n_{\mathrm{OGI}} \, \bar{c}_{\mathrm{OGI}} = \bar{c}_{\mathrm{LCFS}} + \bar{n}_{\mathrm{FU}} \, \bar{c}_{\mathrm{OGI}} \tag{4}$$
For the low-cost fixed sensor to be cost competitive, the number of follow-up surveys in the low-cost fixed sensor program must be less than the number of surveys in the regulatory OGI program. This is because the low-cost fixed sensor program carries ongoing operational costs ($\bar{c}_{\mathrm{LCFS}}$) to keep the fixed sensors operating. There is nuance here as it is possible that follow-up surveys could be mixed with simple operator follow-up inspections (mobilization costs, but no specialized OGI survey fees), but in our specific program, the OGI inspection must be involved to trigger abatement and meet regulator-imposed emissions reductions, so costs are effectively equivalent.
OGI surveys are typically quoted on a per-site basis, but the cost is a time-based cost that has 2 major components: mobilization and survey costs. Mobilization costs for OGI surveys are lower on a per-site basis if sites are surveyed in a focused campaign (as is common when every site requires a survey as part of the regulatory OGI program). Mobilization costs for OGI surveys are higher when ad hoc travel is required to visit a specific site and candidate sites cannot be visited in some logical order to minimize drive time. Survey costs are higher when the entire site must be surveyed and lower when only a portion of the site requires a survey. While the precise calculations of the mobilization and survey costs for regulatory OGI surveys are site specific, ad hoc callouts required from low-cost fixed sensor alerts may have higher mobilization costs. However, the localization skill of low-cost fixed sensors could reduce survey costs as only a fraction of a site may require a survey. While these costs are site-specific, there is little evidence to suggest that on average, OGI surveys initiated as follow-up from fixed sensor alerts would be significantly less expensive than the standard OGI surveys.
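The cost framework in Equations 1, 2, and 4 can be written as a short sketch. The function names are ours; the arguments mirror the symbols defined in the text.

```python
def ogi_program_cost(n_years, n_sites, n_ogi_per_year, c_ogi):
    """Equation 1: total cost of a periodic regulatory OGI program."""
    return n_years * n_sites * n_ogi_per_year * c_ogi

def lcfs_program_cost(n_years, n_sites, c_lcfs_per_year, n_followups_per_year, c_ogi):
    """Equation 2: annual fixed-sensor cost plus alert-triggered OGI follow-ups."""
    return n_years * n_sites * (c_lcfs_per_year + n_followups_per_year * c_ogi)

def breakeven_followups(n_ogi_per_year, c_lcfs_per_year, c_ogi):
    """Equation 4 rearranged: the maximum average number of annual follow-ups
    per site at which the two programs cost the same. A negative result means
    the fixed-sensor program cannot reach cost parity at any follow-up rate."""
    return n_ogi_per_year - c_lcfs_per_year / c_ogi
```

For example, with quarterly OGI at US$350 per site, any fixed-sensor system costing more than 4 × US$350 = US$1,400 per site per year produces a negative break-even follow-up count, so parity is impossible regardless of alert quality.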
In some situations, LDAR can create revenue for O&G companies as leaked gas represents lost product that could otherwise be sold. This revenue may be able to subsidize LDAR operations above and beyond regulatory requirements or help justify a focus on LDAR programs that explicitly monitor for leaks with large emissions rates. These situations require case-by-case analysis as not all leaked gas is salable or particularly valuable. Furthermore, many upstream sites are not connected to gas pipelines, so any recovered gas cannot be moved offsite.
Discussion scenarios
To examine how this framework for costing low-cost fixed sensor programs could manifest with real numbers, we examine some prototypical scenarios (Table 1). Our costs are estimates and could vary for any real deployment; specific cost scenarios can be modeled in the Supplemental Material (see Supplemental Material Table S1 and Supplemental Material Text S1).
| Symbol | Description and Notes | Estimated Value |
| --- | --- | --- |
| $n$ | Duration of program (years) | 1 |
| $n_{\mathrm{OGI}}$ | Number of regulatory OGI inspections per year | 4 |
| $n_s$ | Number of sites in program | 100 |
| $\bar{n}_{\mathrm{FU}}$ | Average per-site annual follow-ups triggered by fixed sensor system | Scenario 1: 4; Scenario 2: 10 |
| $\bar{c}_{\mathrm{OGI}}$ | Average OGI inspection cost per site (US$) | $350 |
| $\bar{c}_{\mathrm{LCFS}}$ | Averaged annual per-site cost of fixed sensor system (US$) | $2,450 |
Values are estimates used with Equations 1 and 2 to provide scenario estimates. For customized scenarios, please use the supplied spreadsheet in Supplemental Material Table S1.
Our basic cost estimations are based on a 1-year program with 100 sites and are compared against an incumbent benchmark of quarterly OGI surveys. We use an OGI survey cost estimate ($\bar{c}_{\mathrm{OGI}}$) of US$350/site. This is consistent with the midpoint of estimates in modeling studies (Fox et al., 2021a; Fox et al., 2021b). The fixed sensor cost ($\bar{c}_{\mathrm{LCFS}}$) is based on a requirement for 4 fixed sensors per site and the following per-site costs: (i) hardware capital costs and depreciation (US$500/year per sensor, or US$2,000/year for the full site), which include periodic replacement (Peltier, 2021), (ii) 1 service visit per site per year at US$350 to perform installation or yearly maintenance (assumed to be similar in cost to an OGI survey), and (iii) an offsite compute cost of US$100/year per site (Peltier, 2021). Together, the yearly fixed sensor cost ($\bar{c}_{\mathrm{LCFS}}$) is US$2,450 per site; this is likely an underestimate and implies that the low-cost fixed sensor vendor is operating at sufficient scale.
Reasonable estimates for the number of follow-ups triggered by a low-cost fixed sensor will vary with the skill of the sensor system and the emissions profile of the sites. Using a consensus-developed controlled release protocol, Bell et al. (2023) tested 6 fixed sensors (several of which resemble our prototypical program: solutions A, C–F, and K). The strongest performer (solution E) issued 2,382 detection reports over 104 days and 567 releases; 79% of alerts were false positives. The only low-cost fixed sensor tested by Bell et al. (2023) that would issue a potentially cost-effective number of alerts was solution K, which issued only 2 alerts over a 200-day deployment. However, solution K detected only 0.3% of releases and likely has a very high detection limit, much higher than OGI.
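As a rough illustration of the alert burden these results imply (our arithmetic, using only the figures reported above):

```python
# Alert burden implied by solution E in Bell et al. (2023):
# 2,382 detection reports over 104 days, with 79% false positives.
reports, days, fp_rate = 2382, 104, 0.79

false_alerts = reports * fp_rate      # roughly 1,882 false positives
alerts_per_day = reports / days       # roughly 23 reports per day
false_per_day = false_alerts / days   # roughly 18 false alarms per day
```

At these rates, any program requiring follow-up of every report would be untenable, which motivates the far smaller follow-up counts used in our scenarios.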
The emissions profile of target sites can have a serious effect on program cost, particularly when a program mandates follow-up for all alerts. Nontarget emissions, such as vents, combustion emissions, and maintenance emissions, will likely trigger alerts that require follow-up inspection. Even if the low-cost fixed sensor's skill were perfect (a situation not supported by results from Bell et al., 2023), there would be follow-up inspections that do not find abatable emissions, a major issue for cost efficiency. A quantification-based alert trigger could help mitigate these issues, but results from Bell et al. (2023) indicate that relying on quantification estimates is likely premature. For our discussion scenarios, we use 2 very optimistic situations: Scenario 1 uses 4 follow-up visits per site per year and Scenario 2 uses 10.
Using Equations 1 and 2, the costs in US$ are as follows: the quarterly OGI program costs US$140,000 per year (100 sites × 4 surveys × US$350), while the fixed sensor program costs US$385,000 per year in Scenario 1 (100 sites × [US$2,450 + 4 × US$350]) and US$595,000 per year in Scenario 2 (100 sites × [US$2,450 + 10 × US$350]).
Cost parity (Equation 4) cannot be achieved in this example because the annual per-site cost of the fixed sensor system is US$2,450, exceeding the US$1,400 per year cost of regulatory OGI. Equation 4 also straightforwardly bounds the maximum yearly amount that a fixed sensor vendor could charge for equipment and service fees at the cost of a regulatory program (Equation 1). In our example, this charge cannot exceed US$1,400 per site per year if the program is to be less expensive than regulatory LDAR. From another perspective, Equation 4 quantifies the cost of the extra data that a low-cost fixed sensor may provide; the operator can weigh the value of that data for strategic emissions management (beyond regulatory compliance) with context from test performance (e.g., Bell et al., 2023). A spreadsheet is provided for readers to explore their own cost scenarios (see Supplementary Material Table S1 and Supplementary Material Text S1).
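Under the parameter values in Table 1, the cost comparison can be reproduced with a short script (the function and variable names are ours; the formulas follow Equations 1, 2, and 4 as described in the text):

```python
def regulatory_ogi_cost(n_years=1, n_ogi=4, n_sites=100, c_ogi=350):
    """Equation 1: total cost of a periodic regulatory OGI program (US$)."""
    return n_years * n_ogi * n_sites * c_ogi

def fixed_sensor_cost(n_years=1, n_sites=100, c_fs=2450, followups=4, c_ogi=350):
    """Equation 2 (as described in the text): per-site fixed sensor
    hardware/service cost plus OGI follow-up callouts triggered by alerts."""
    return n_years * n_sites * (c_fs + followups * c_ogi)

baseline = regulatory_ogi_cost()             # $140,000/year for 100 sites
scenario1 = fixed_sensor_cost(followups=4)   # $385,000/year
scenario2 = fixed_sensor_cost(followups=10)  # $595,000/year

# Cost parity bound (Equation 4): the maximum per-site annual vendor
# charge that matches the regulatory program is baseline / n_sites.
max_vendor_charge = regulatory_ogi_cost() / 100  # $1,400/site/year
```

This is the same calculation implemented in the supplied spreadsheet, which also includes a more granular cost breakdown.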
Discussion
Regulatory efficacy
Our prototypical low-cost fixed sensor program is likely close to the basic model being actively considered by regulators. The U.S. EPA's (2022) newly proposed regulations would allow duty holders to use continuous monitors as an alternative to quarterly OGI surveys for methane emissions management. ECCC is also considering allowing continuous monitors as part of forthcoming regulations; however, additional details on the potential approach have not been released (Government of Canada, 2022). The EPA's proposed regulations, if passed, would require continuous monitors to perform site-level emissions quantification ≥ 1× every 12 h to benchmark against thresholds that trigger operator action (U.S. EPA, 2022). Regulators are thus continuing to work with a model of regulating LDAR actions that, when implemented, are assumed to reduce emissions (Sawyer et al., 2022).
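The proposed trigger logic is simple to express. A minimal sketch follows (the function name, the example series, and the 1 kg/h threshold are our placeholders, not values from the proposed rule):

```python
def action_triggers(quantifications_kg_h, threshold_kg_h):
    """Benchmark each 12-h site-level quantification against an action
    threshold, as in the EPA-proposed model; returns the indices of the
    quantification windows that would require operator action."""
    return [i for i, q in enumerate(quantifications_kg_h) if q >= threshold_kg_h]

# Example: six 12-h windows (3 days) at a hypothetical 1 kg/h threshold
windows = [0.2, 0.3, 4.1, 3.8, 0.4, 0.1]
triggers = action_triggers(windows, threshold_kg_h=1.0)  # windows 2 and 3
```

The regulatory difficulty lies not in this comparison but in the accuracy of the quantifications being compared.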
The first regulatory challenge relates to alert interpretation. In our program, all alerts proceed to OGI follow-up. The process of alert interpretation and OGI follow-up is likely subject to considerable latitude in real-world execution. From an operational perspective, follow-up is expensive, and in the case that follow-up is unsuccessful at identifying abatable emissions (alerts are from vents or are false positives), the cost can be seen as unproductive. This creates an issue where operators who have dealt with unproductive OGI follow-up surveys are keen to avoid the issue again and may reach for explanations that allow the alert to be ignored. This type of behavior is well-known in other fields involving automation, where high false alarm rates degrade trust and performance (Yamada and Kuchar, 2006; Wickens et al., 2009). Furthermore, LDAR is often performed in small internal business units that can be dissociated from central company management.
Practically, alerts could be justifiably ignored if the source is interpreted as a known vent, emissions from incomplete combustion, maintenance emissions, or some intermittent process event that cannot justify further follow-up. Improved localization skill and possibly quantification could help improve the interpretability of alerts, but the interpretation cannot be easily automated as it requires considerable knowledge of the actual sites and process conditions (e.g., maintenance events). Operators are best positioned to make these interpretations, and if incentives were balanced, providing operators this latitude could be a reasonable policy option to increase the cost-effectiveness of LDAR programs involving low-cost fixed sensors.
Regulators very likely face a situation where there is a strong operator incentive to ignore alerts from fixed sensors. This is concerning because it suggests the emissions management program that exists on paper may differ from the program that is executed. Regulators would likely meet this challenge by codifying operations prescriptively, as we did for our prototypical program. For example, they could require all alerts to be investigated with a specified method or require an auditable explanation of why an alert was ignored.
A regulatory situation where all alerts must trigger follow-up creates pressure on low-cost fixed sensor companies to improve the accuracy of their alerts. This is productive, as preliminary results from Bell et al. (2023) suggest that there is considerable room for improvement. It also may lead to more considered deployments as some sites with closely spaced equipment or known persistent venting may be unsuitable for low-cost fixed sensor programs (Klise et al., 2020).
To ensure program performance, regulators also need to manage the risk of low-cost fixed sensor nonperformance. Core to this is maintenance of the gas sensing element. Results from the research community underscore serious issues with many low-cost sensors, suggesting considerable upkeep is needed to sustain performance standards (Peltier, 2021). This pressure may also push low-cost fixed sensor vendors toward better sensing technology. However, a sensor that does not work will not issue alerts, which may be a desirable state in some company cultures given the incentives against incurring follow-up costs.
Additionally, blind testing of performance, and attaching those results to certain deployment standards, is required to ensure reliable maintenance of performance standards. For example, the number of sensors per site is an important qualifier of performance because it is expected that the quality of results is correlated with the number of sensors on site (Bell et al., 2023). To ensure performance, regulators need to carefully validate that the exact configurations used during testing (as a qualifier of results) are extended to deployments. There are existing tools to optimize fixed sensor placement and link the configuration of fixed sensors to the performance standards (see Klise et al., 2020).
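Placement optimization tools of the kind described by Klise et al. (2020) typically select sensor locations that maximize detection coverage of simulated leak scenarios. A simplified greedy sketch (our illustration, not the actual tool's API) is:

```python
def greedy_placement(coverage, n_sensors):
    """Greedy max-coverage heuristic: repeatedly pick the candidate
    location that detects the most not-yet-covered leak scenarios.
    `coverage` maps each candidate location to the set of scenario IDs
    it can detect (e.g., from dispersion modeling)."""
    chosen, covered = [], set()
    for _ in range(n_sensors):
        best = max(coverage, key=lambda loc: len(coverage[loc] - covered))
        if not coverage[best] - covered:
            break  # no remaining scenario is detectable
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Toy example: 4 candidate locations, 6 simulated leak scenarios
cov = {"NE": {1, 2, 3}, "NW": {3, 4}, "SE": {4, 5, 6}, "SW": {2}}
sites, detected = greedy_placement(cov, n_sensors=2)  # NE then SE
```

The point for regulators is that the chosen configuration, not just the sensor model, is the performance qualifier; deployments must reproduce the tested configuration for test results to carry over.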
Program cost
Cost is an important concern to operationalization of LDAR programs by O&G companies (who will ultimately reduce O&G fugitive emissions). However, we also recognize that low-cost fixed sensors provide additional data that periodic OGI surveys cannot, which creates additional value for the program even if costs are greater than periodic OGI. The value of this data to operators can be compared against the cost premium calculated by Equation 4.
While the initial deployment costs of low-cost fixed sensors are attractive, this discussion reveals that the total program cost could be much larger than expected and there are important cost linkages between different components of a program. When compared against a regulatory periodic OGI program, there is relatively little room for low-cost fixed sensor costs. Low-cost fixed sensor vendors likely face considerable pressure to reduce the deployment cost while simultaneously increasing the skill of alerts.
Localization skill could help reduce costs in 2 ways: (i) reducing costs of follow-up OGI as only a fraction of a site needs to be inspected (the on-site survey time only, not mobilization costs) and (ii) allowing operators to triage known vents without triggering a follow-up survey (if allowed in regulations). While both may reduce total program costs, it is not clear by how much. Mobilization costs of ad hoc follow-up surveys could dominate follow-up OGI costs and reduction in survey times could be negligible. While we use the US$350/site visit value for both follow-up and regulatory OGI inspections, it is possible that ad hoc callouts could be much more expensive due to inefficient mobilization. Furthermore, there are important operational management issues (e.g., availability of OGI cameras and operators) associated with managing capacity for timely ad hoc surveys; regulations with fast follow-up requirements could be much more costly than discussed.
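Point (i) can be made concrete with a simple decomposition (the mobilization/survey split of the US$350 figure is our assumption for illustration):

```python
def followup_cost(mobilization=250.0, survey=100.0, site_fraction=1.0):
    """Ad hoc OGI callout cost: a fixed mobilization charge plus an
    on-site survey charge scaled by the fraction of the site inspected.
    The $250/$100 split of the $350 visit is an illustrative assumption."""
    return mobilization + survey * site_fraction

full_site = followup_cost()                    # US$350: whole-site survey
localized = followup_cost(site_fraction=0.25)  # US$275: inspect 25% of site
```

With this split, even sharp localization saves only US$75 per callout; if mobilization dominates, localization adds value mainly by triaging vents rather than by shortening surveys.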
Some site configurations could be more costly than others. Vents, nonroutine venting, and combustion emissions should all trigger alerts, and these nontarget alerts could be costly. If regulations provide no capacity to selectively ignore alerts from vents, sites with vents may be very expensive candidates for low-cost fixed sensor programs. If vent alerts are ignored or spatially "blacked out," this creates another issue: the equipment containing vents must be surveyed for leaks with some other method, such as periodic OGI, because it is difficult to say confidently that there is no leak next to a vent. Equipment-scale quantification may help unravel the emissions signature of a piece of venting equipment with a leak, but this may be beyond the present capabilities of low-cost fixed sensors (Bell et al., 2023). Despite these limitations, low-cost fixed sensors may help with extreme nonroutine vents or large magnitude leaks, but interpretation demands very careful analysis of both process conditions and alert accuracy metrics by an operator who understands the specifics of the site in question.
Our analysis does not consider the cost of abatement. It is possible that a low-cost fixed sensor program could produce most of its emissions reductions through fast identification of large emitters where emissions distributions are highly skewed. Abatement costs should be lower if fewer repairs are conducted, concentrated on large sources, instead of repairing many sources following each periodic OGI survey, while producing equivalent emissions reductions. This remains difficult to calculate and will require granular data from operators to resolve.
Program design could incorporate other types of technologies to help address issues with our low-cost fixed sensor program. The core need is a technology that can accurately alert in the presence of fugitive emissions only. Where fugitive emissions are colocated with known vents or combustion emissions, very good spatial resolution is needed. Satellites and aircraft typically lack sufficient spatial resolution (Fox et al., 2019a). A possible addition to a low-cost fixed sensor program could be imager-type onsite fixed sensors (see Bell et al., 2023).
Technical performance discussion
This discussion suggests that low-cost fixed sensor technical performance is imperative and has considerable leverage. Even small inaccuracies carry significant cost, trust, and regulatory penalties. It is worthwhile to consider the technical barriers to improved performance and discuss whether improved performance is technically feasible within use case constraints.
First, low-cost fixed sensor hardware demands formalized, transparent, and effective quality management. Accurate alerts require quality measurements, which require care in installation and a maintenance schedule. For example, an anemometer bent out of alignment could report incorrect wind directions and mislocate an emissions source or provide false confidence in a detection; the error originates at the instrument. Maintaining a performance specification for weather station instruments requires care and is technically feasible, but it may require more than the 1 technician visit per year assumed in our scenarios.
Second, low-cost fixed sensor algorithms require considerable reflection. Most low-cost fixed sensor algorithms are proprietary but are unlikely to vary considerably from our prototypical description. Localization performance has reasonable paths to improvements in total program cost effectiveness, and in the case where accuracy was sufficiently validated, could provide a case to regulators that localization could be used to streamline follow-up and make defensible interpretations about known vents. However, emissions quantification has proven to be a considerable challenge (Riddick et al., 2022; Bell et al., 2023); it is unclear whether sufficiently large improvements to quantification are likely to occur in the near future.
Third, it is not clear that metal-oxide sensors are fit for this purpose (Peltier, 2021). The substantial program costs associated with follow-up visits suggest that placing an extremely low-cost commodity sensor in a central position with such considerable leverage carries significant risk. There is abundant justification for more expensive sensors to mitigate cost escalation risk, simplify anomaly filtering algorithms (e.g., Riddick et al., 2022), and improve performance. However, higher cost sensors may face scaling issues; this remains to be resolved.
Closely related to these issues is the slow path from technical improvement to market reward. After development, technical improvements such as new algorithms must be tested to demonstrate performance gains to regulators and clients. Unfortunately, the only feasible method to demonstrate performance involves controlled release testing experiments (Bell et al., 2023), leading to a lengthy develop, seal, deploy workflow that inherently slows technical improvement.
Conclusions
We have discussed a prototypical low-cost fixed sensor emissions management program for the upstream O&G industry. Significant costs associated with low-cost fixed sensor programs, mostly in the form of follow-up costs, are expected. Although the initial cost of these sensors is low, and as a result attractive to operators, total program costs could be significantly more expensive than periodic regulatory OGI surveys. This noted, operators may see sufficient additional value in the low-cost fixed sensors to justify the cost premium.
Our analysis may not be applicable at all sites. In particular, sites that have no nontarget emissions may be more amenable to low-cost fixed sensor programs, particularly if the relatively large false positive rates can be reduced in the future (Bell et al., 2023). Additionally, the incumbent technology of periodic OGI surveys has well-known limitations with detection limits, coverage, and abatement effectiveness (Ravikumar et al., 2020; Zimmerle et al., 2020).
Regulators face a considerable challenge addressing flexibility in alert interpretation and follow-up. Flexibility to selectively ignore alerts (particularly from vents) is likely to be the only way that such a program could be cost competitive. However, this flexibility may in some situations create the opportunity to neglect target emissions.
While fixing the issue of ineffective alerts in low-cost fixed sensors is very likely possible through better sensors, improved quality management, and better algorithms, the need for root cause interpretation and the widespread presence of known emissions sources indicate that there could be a ceiling for program cost competitiveness.
In summary, this discussion suggests that the cost and operational penalties associated with inefficient and unnecessary follow-up could be a defining hallmark of low-cost fixed sensor programs, and considerable work across sensor performance, program design, and regulations is likely required to achieve cost-competitive LDAR. We recommend considerable caution with low-cost fixed sensor LDAR programs at present, but note that results may improve in the future.
Data accessibility statement
All data are available either in main manuscript or supplemental materials.
Supplemental files
The supplemental files for this article can be found as follows:
Supplemental material Table S1. User editable low-cost fixed sensor budget.
A user-editable budget for exploring different cost scenarios using Equations 1–4. The budget also includes a more nuanced cost breakdown than discussed in the main paper to aid more realistic cost modeling. All costs estimates are approximate and do not represent any specific vendor or sensor.
Supplemental material Text S1. Description of user editable budget.
Enhanced description of the mechanics of Supplemental material Table S1.
Funding
The authors have no funding to report.
Competing interests
The authors have no competing interests.
Author contributions
Contributed to conception and design: TEB, CHH, TG, CV, MG.
Drafted and/or revised the article: TEB, CHH, TG, CV, MG.
Approved the submitted version for publication: TEB, CHH, TG, CV, MG.
References
How to cite this article: Barchyn, TE, Hugenholtz, CH, Gough, T, Vollrath, C, Gao, M. 2023. Low-cost fixed sensor deployments for leak detection in North American upstream oil and gas: Operational analysis and discussion of a prototypical program. Elementa: Science of the Anthropocene 11(1). DOI: https://doi.org/10.1525/elementa.2023.00045
Domain Editor-in-Chief: Detlev Helmig, Boulder AIR LLC, Boulder, CO, USA
Associate Editor: Gunnar W. Schade, Texas A&M University, College Station, TX, USA
Knowledge Domain: Atmospheric Science