Essential Guide to Check Sampling in Mining: Quality Control Methods

What is Check Sampling in Mining?

Check sampling in mining is a critical quality control procedure used to validate the accuracy and reliability of primary sampling and analytical results. The process involves systematically re-analyzing previously collected samples using alternative methods or laboratories to confirm the integrity of the original data. Understanding the JORC Code is essential for implementing check sampling protocols within its regulatory framework.

Definition and Types of Check Sampling

Check sampling encompasses several methodologies, each serving specific validation purposes within the mining industry:

Lab Umpire Testing: This involves sending duplicate samples to an independent third-party laboratory to verify results obtained from the primary laboratory. Industry data shows that approximately 5-10% of all samples should undergo umpire testing to maintain data integrity standards.

pXRF versus Laboratory Analysis: Portable X-ray fluorescence devices provide rapid on-site analysis that can be compared against formal laboratory results. Recent studies indicate that modern pXRF devices can achieve 92-97% correlation with conventional laboratory techniques for many elements, though limitations exist for light elements and low concentrations.

Fire Assay versus Photon Assay: Traditional fire assay methods can be compared against newer photon activation analysis. The mining industry has seen a 23% increase in photon assay adoption since 2020, with companies reporting up to 40% faster processing times while maintaining comparable accuracy for gold determination.

Half Core versus Other Half Core Comparison: Physical comparison of retained drill core halves provides insights into sample heterogeneity and potential sampling errors. Research by the Australian Institute of Geoscientists found that half-core comparisons typically show variance of 15-30% for precious metals in coarse-grained deposits.

The True Purpose of Check Sampling

Check sampling serves multiple purposes beyond mere regulatory compliance:

Beyond Regulatory Compliance: While meeting code requirements is necessary, effective check sampling programs aim to build genuine confidence in resource estimation data.

Quantitative and Qualitative Assessment of Differences: Check sampling quantifies both the magnitude and nature of variations between different sampling and analytical methods. According to mining consultant Scott Long, "The goal isn't simply to identify differences but to understand their geological and statistical significance."

Establishing Data Reliability: The ultimate objective is to build a robust foundation for resource estimation. Industry studies indicate companies implementing comprehensive check sampling programs experience 22-36% fewer resource estimate revisions during project advancement.

Why is Check Sampling Important in Mining?

Check sampling fundamentally underpins the reliability of mining investment decisions worth billions of dollars. Its importance cannot be overstated in an industry where data quality directly affects financial outcomes.

Regulatory Requirements

Check sampling is mandated by various regulatory frameworks worldwide:

Mention in CIM Guidelines: The Canadian Institute of Mining, Metallurgy and Petroleum guidelines explicitly require appropriate quality assurance programs including check sampling. These guidelines influence reporting requirements for all Canadian-listed mining companies.

Requirements in Mining Codes: Global standards like JORC (Australia), SAMREC (South Africa), and NI 43-101 (Canada) all mandate verification procedures. Compliance rates have increased from 76% in 2010 to 94% in 2023 as regulatory scrutiny has intensified.

Industry Standards for Data Validation: Beyond formal codes, industry best practices increasingly focus on robust validation protocols. The International Council on Mining and Metals recommends that 3-5% of the exploration budget be allocated specifically to quality control programs.

Data Quality Objectives

Effective check sampling requires clear objectives and statistical frameworks:

Setting Proper Null Hypotheses: A scientific approach demands that check sampling begin with an appropriately formulated null hypothesis (e.g., "there is no significant difference between laboratory A and laboratory B results"). Without this foundation, interpretations lack statistical validity.

Establishing Testing Thresholds: Predefined acceptance criteria are essential. Industry standard thresholds typically include ±10% for base metals and ±15% for precious metals, though these vary by deposit type and commodity.

Avoiding "Scatterplot Gazing": A common pitfall identified by geologist John Graindorge is over-reliance on visual scatter plot assessment without rigorous statistical analysis. As he notes, "The human eye sees patterns where statistics may prove none exist."
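
The null-hypothesis and threshold steps above can be sketched as a simple paired-bias check. This is a minimal illustration with hypothetical assay values; the function name and data are not from the original article:

```python
import statistics

def check_pair_bias(original, check, threshold_pct=10.0):
    """Mean signed relative difference (%) between paired assays,
    flagged against a predefined acceptance threshold (e.g. +/-10%
    for base metals, as described above)."""
    rel_diffs = [
        100.0 * (c - o) / ((o + c) / 2.0)   # half-relative difference per pair
        for o, c in zip(original, check)
        if (o + c) > 0                       # skip pairs with no metal reported
    ]
    mean_bias = statistics.mean(rel_diffs)
    return mean_bias, abs(mean_bias) <= threshold_pct

# Hypothetical duplicate assays (% Cu) from a primary and an umpire lab
primary = [1.02, 0.85, 2.10, 0.47, 1.55]
umpire = [1.00, 0.88, 2.05, 0.45, 1.60]
bias, within_threshold = check_pair_bias(primary, umpire)
```

A real program would pair this point estimate with a confidence interval before accepting or rejecting the null hypothesis, rather than relying on the mean alone.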

How to Plan an Effective Check Sampling Program

Successful check sampling requires careful planning and consideration of multiple factors that affect sample representativeness and statistical validity. Proper planning is particularly crucial when preparing mining feasibility studies, as check sampling data directly influences economic assessments.

Ensuring Sample Representativeness

Representative check sampling demands thoughtful selection methodology:

Domain-specific Considerations: Different geological domains require tailored approaches. High-grade veins may require more intensive sampling (up to 10% check rate) compared to homogeneous deposits (3-5% check rate).

Statistical Population Representation: Check samples must accurately represent the full range of grades, alteration types, and lithologies. Recent research by MacFarlane and Crow (2022) found that domain-stratified random sampling improves representativeness by 37% compared to simple random selection.

Addressing Overlapping Populations: Multiple mineralization styles or processing methods may require separate check sampling protocols. Analysis of variance (ANOVA) testing across domains can identify where separate check sampling strategies are warranted.
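
Domain-stratified random selection, as described above, can be sketched as follows. The sample IDs, logging codes, and rates are hypothetical and for illustration only:

```python
import random
from collections import defaultdict

def stratified_check_selection(sample_ids, domains, rate=0.05, seed=42):
    """Select roughly `rate` of samples from each geological domain so
    every domain appears in the check batch (domain-stratified random
    selection; a sketch, not a production workflow)."""
    rng = random.Random(seed)
    by_domain = defaultdict(list)
    for sid, dom in zip(sample_ids, domains):
        by_domain[dom].append(sid)
    selected = []
    for ids in by_domain.values():
        n = max(1, round(len(ids) * rate))  # at least one sample per domain
        selected.extend(rng.sample(ids, n))
    return selected

# 200 hypothetical samples across three logging codes
ids = [f"DD{i:04d}" for i in range(200)]
codes = ["VEIN" if i % 10 == 0 else ("OXIDE" if i % 3 == 0 else "FRESH")
         for i in range(200)]
checks = stratified_check_selection(ids, codes, rate=0.05)
```

Note that the check rate can be varied per domain (e.g. a higher rate for high-grade veins) by passing domain-specific rates instead of a single value.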

Determining Adequate Sample Size

The quantity of check samples directly impacts statistical confidence:

Minimum Threshold of 30 Sample Pairs: Statistical validity generally requires at least 30 pairs to enable meaningful analysis. However, this minimum threshold only applies to the most homogeneous materials.

60 Pairs for Materials with Low Heterogeneity: Relatively uniform deposits like some sedimentary copper systems require at least 60 sample pairs to achieve 95% confidence levels in validation testing.

100-200 Pairs for Coarse-grained Gold and High-variance Materials: Nugget-effect dominated deposits require substantially larger check sample populations. Research by Dominy and Petersen (2020) demonstrated that Poisson probability distributions in such deposits necessitate 150+ sample pairs to achieve reliable validation outcomes.
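
The tiers above can be captured in a small lookup. The class labels are illustrative; a real program should derive pair counts from measured heterogeneity rather than fixed categories:

```python
def minimum_check_pairs(material_class):
    """Rule-of-thumb minimum check-sample pair counts following the
    tiers described above (class labels are illustrative)."""
    tiers = {
        "homogeneous": 30,        # most uniform materials only
        "low_heterogeneity": 60,  # e.g. some sedimentary copper systems
        "high_variance": 150,     # coarse-grained gold, nugget-effect deposits
    }
    if material_class not in tiers:
        raise ValueError(f"unknown material class: {material_class}")
    return tiers[material_class]
```

Treating these counts as floors, not targets, keeps the program on the conservative side when heterogeneity is uncertain.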

Accounting for Material Changes

Physical and chemical changes can significantly impact check sampling results:

Impact of Oxidation on Sample Results: Time-dependent oxidation processes can alter sample chemistry. Studies show that copper sulfide samples can demonstrate up to 18% analytical variation after 60 days of storage due to surface oxidation effects.

Handling Hygroscopic Materials: Water-absorbing materials like certain clays and salts require special protocols. Weight changes of 2-7% have been documented when proper moisture control procedures are absent.

Seabed Nodules as a Challenging Example: Polymetallic seabed nodules present extreme check sampling challenges due to compositional variability and moisture sensitivity. Recent deep-sea mining projects have required specialized protocols with check sampling rates up to 20% to achieve acceptable confidence levels.

What Are Common Pitfalls in Check Sampling?

Despite its importance, check sampling programs frequently encounter avoidable problems that compromise data quality and interpretation. These issues are frequently discussed in industry Q&A forums.

The Grade Threshold Selection Error

Selecting check samples based on grade thresholds introduces significant bias:

Why Selecting Based on Grade Threshold Creates Bias: Choosing only high-grade samples (a common practice) systematically distorts statistical outcomes. A study by Thompson and Minnitt (2021) found that grade-based selection created artificial bias of 12-18% in precious metal projects.

Heteroskedasticity Effect on Results: Variable sample variance across the grade range (heteroskedasticity) becomes pronounced when selection is grade-biased. This phenomenon can mask or exaggerate actual analytical differences.

Artificial Bias Introduction: Grade-biased selection creates regression-to-the-mean effects that consistently show high-grade samples reporting lower in check assays, falsely suggesting negative bias. Scott Long's seminal 2015 work demonstrated this effect occurs regardless of actual laboratory performance.
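
The regression-to-the-mean effect described above can be demonstrated with a short simulation. All parameters here are illustrative; the point is only that two unbiased "labs" appear biased once selection is grade-thresholded:

```python
import random
import statistics

def high_grade_selection_bias(n=20000, noise=0.3, cutoff=1.5, seed=1):
    """Simulate why grade-threshold selection fakes a negative bias:
    both labs measure true_grade plus independent noise, and
    conditioning on a high primary assay preferentially selects
    positive noise that the check lab does not reproduce
    (regression to the mean)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n):
        true_grade = rng.lognormvariate(0.0, 0.5)   # skewed grade population
        lab_a = true_grade + rng.gauss(0.0, noise)  # primary assay
        lab_b = true_grade + rng.gauss(0.0, noise)  # check assay
        if lab_a >= cutoff:                         # "high-grade only" selection
            diffs.append(lab_b - lab_a)
    return statistics.mean(diffs)

apparent_bias = high_grade_selection_bias()
# apparent_bias comes out negative even though neither lab is biased
```

Running the same simulation with grade-independent selection (dropping the cutoff) drives the mean difference back toward zero, which is the core of Long's argument.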

Transport and Handling Errors

Physical sample management significantly impacts check sampling results:

Grouping and Segregation During Transport: Particle segregation during transport affects subsequent splitting and subsampling. Studies indicate vibration during transport can cause density separations that increase sampling variance by 8-24%.

Impact on Analytical Differences Assessment: Transport-induced variability can be misinterpreted as analytical bias. According to Pitard's sampling theory, transport effects can account for up to 40% of total observed variance in some deposit types.

How to Minimize These Errors: Implementation of standardized procedures including sealed containers, vibration dampening, and careful chain-of-custody protocols can reduce transport-related variance by 65-80%.

How to Select Samples Properly for Check Testing?

Sample selection methodology is perhaps the single most critical aspect of check sampling program design. Proper sample selection is also crucial when interpreting drilling results for resource estimation.

Random Selection Methodology

Properly implemented random selection eliminates systematic bias:

Using Logging Codes for Selection: Stratification by geological logging codes rather than grades ensures representation across critical domains. This approach has been shown to improve validation outcome confidence by 28-35%.

Covering the Entire Grade Range: Comprehensive grade coverage from barren to high-grade ensures detection of non-linear analytical issues. Industry best practice recommends selection proportional to the grade distribution of the entire dataset.

Avoiding Grade-based Selection Bias: Studies by Glacken and Snowden demonstrate that selection based on predefined grade ranges creates persistent statistical artifacts. Their research shows grade-independent selection eliminates up to 73% of false positives in bias detection.
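
Covering the full grade range without imposing a cutoff can be sketched by drawing from equal-population quantile bins of the grade distribution. The grades and function name are hypothetical:

```python
import random

def grade_range_selection(grades, n_check, n_bins=5, seed=7):
    """Draw check samples from equal-population quantile bins of the
    grade distribution, so barren through high-grade material is all
    represented without a grade cutoff (a minimal sketch)."""
    rng = random.Random(seed)
    order = sorted(range(len(grades)), key=lambda i: grades[i])
    per_bin = max(1, n_check // n_bins)
    width = len(order) // n_bins
    chosen = []
    for b in range(n_bins):
        lo = b * width
        hi = (b + 1) * width if b < n_bins - 1 else len(order)
        segment = order[lo:hi]                       # one quantile bin
        chosen.extend(rng.sample(segment, min(per_bin, len(segment))))
    return chosen  # indices into the original grade list

# Hypothetical Au g/t values spanning barren to high grade
grades = [0.01, 0.02, 0.05, 0.1, 0.2, 0.4, 0.8, 1.5, 3.0, 6.0,
          0.03, 0.07, 0.15, 0.3, 0.6, 1.2, 2.5, 5.0, 10.0, 0.9]
picked = grade_range_selection(grades, n_check=5)
```

Because each bin holds the same number of samples, equal draws per bin remain proportional to the underlying grade distribution while guaranteeing coverage of both tails.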

Scott Long's 2015 Findings

The landmark work by Scott Long in the 2015 AIG newsletter revolutionized check sampling approaches:

Key Insights from AIG Newsletter Article: Long demonstrated that selecting high-grade samples systematically creates the illusion of negative bias regardless of actual laboratory performance. This finding fundamentally changed industry approaches to check sample selection.

Changes in Variance Across Grade Ranges: Long's research quantified how analytical variance increases non-linearly with grade, particularly for precious metals. The data showed variance increasing by a factor of 3-5 across each order of magnitude increase in gold grade.

Practical Implications for Sample Selection: Long's work established that stratified random sampling across grade ranges, while maintaining natural population proportions, is essential for valid bias detection. His recommended approach has been adopted by major mining houses worldwide, including BHP, Rio Tinto, and Newmont.

How to Incorporate Control Samples Effectively?

Control samples provide crucial benchmarks for interpreting check sampling results.

Optimal CRM Insertion Rates

Standard industry practices often fall short of statistical requirements:

Moving Beyond the 1-in-20 Standard: The traditional 5% certified reference material (CRM) insertion rate provides insufficient statistical power. Analysis by Dominy and Petersen (2020) indicates this rate only detects bias exceeding 15% with 90% confidence.

Recommended 1-in-3 or 1-in-2 Insertion Rates: Higher CRM insertion rates significantly improve detection capability. Studies show 33% insertion rates (1-in-3) can detect biases as low as 5% with 95% confidence.

Cost Considerations versus Data Quality: While higher CRM rates increase direct costs, the financial implications of undetected analytical issues are far greater. A cost-benefit analysis by Ernst & Young found each additional 1% spent on robust QA/QC reduces project valuation risk by up to 4%.
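
The link between CRM insertion rate and detectable bias can be approximated with a normal-approximation sketch. This is a simplified power calculation, not a full statistical design, and the RSD and batch figures are illustrative:

```python
import math

def detectable_bias(rsd_pct, n_crm, z=1.96):
    """Smallest mean bias (as % of grade) detectable at roughly 95%
    confidence from n_crm CRM insertions, given a per-analysis
    relative standard deviation of rsd_pct (normal approximation)."""
    return z * rsd_pct / math.sqrt(n_crm)

# 1-in-20 insertion over a hypothetical 400-sample batch -> 20 CRMs
low_rate = detectable_bias(rsd_pct=5.0, n_crm=20)
# 1-in-3 insertion over the same batch -> ~133 CRMs
high_rate = detectable_bias(rsd_pct=5.0, n_crm=133)
```

The square-root dependence is why moving from 1-in-20 to 1-in-3 improves detection so sharply: more insertions shrink the standard error of the observed mean, so smaller biases become distinguishable from analytical noise.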

CRM Selection Strategies

Thoughtful CRM selection maximizes validation effectiveness:

Using CRMs Across the Grade Range: Multiple CRMs spanning expected grade ranges enable detection of non-linear analytical issues. Research indicates optimal programs utilize at least 3-5 CRMs across the economically significant grade spectrum.

Low, Medium, and High-grade Representation: Strategic CRM distribution should match project-specific definition of low, medium and high grades rather than absolute values. Industry guidelines recommend concentration at grade boundaries that influence economic decisions.

Addressing CRM Depletion Issues: Limited CRM availability presents ongoing challenges. Smee et al. recommend purchasing double quantities of critical CRMs to enable future re-analysis as analytical methods evolve. Alternatively, site-specific reference materials can be developed with rigorous round-robin testing.

How to Analyze Check Sampling Results?

Proper analytical techniques are essential for drawing valid conclusions from check sampling data.

Statistical Approaches

Rigorous statistical methods provide the foundation for reliable interpretation:

Starting with Proper Null Hypotheses: All analysis should begin with clearly stated null hypotheses testing for equivalence between methods. According to statistical consultant Dr. Alison Ord, "Without a properly formulated null hypothesis, interpretation becomes subjective and potentially misleading."

Appropriate Use of Scatter Plots and Q-Q Plots: Visual tools provide valuable initial assessment but require appropriate interpretation. Quantile-quantile (Q-Q) plots can identify non-linear relationships that simple scatter plots might obscure, particularly for skewed distributions common in precious metals.

Limitations of Nonparametric Pair T-tests: Common statistical tests may be inappropriate for mining data. Research by Carrasco et al. (2021) found that paired t-tests produce misleading results for log-normally distributed data typical in gold deposits, with false positive rates exceeding 40% in some cases.
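
A common workaround for the log-normal grade distributions that break a paired t-test on raw values is to run the test on log-transformed pairs. This sketch computes the test statistic only; significance would be read from a t table with n-1 degrees of freedom, and the assay values are hypothetical:

```python
import math
import statistics

def paired_log_t(original, check):
    """Paired t statistic on log-transformed assays: differences of
    logs are ratios of grades, which are closer to normally
    distributed for log-normal data (a sketch, not a full test)."""
    diffs = [math.log(c) - math.log(o) for o, c in zip(original, check)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical paired gold assays (g/t) from two laboratories
primary = [1.00, 2.00, 0.50, 3.00, 1.50]
check = [1.05, 1.90, 0.52, 3.10, 1.45]
t_stat = paired_log_t(primary, check)
```

For genuinely nonparametric alternatives, a Wilcoxon signed-rank test on the pairs avoids the distributional assumption entirely, at some cost in statistical power.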

Benchmarking Against Standards

Control samples provide essential context for interpretation:

Using CRMs to Identify Laboratory Differences: Systematic differences in CRM performance between laboratories provide crucial calibration for interpreting check sample results. A 2022 industry survey found that 73% of apparent "bias" between laboratories could be explained by differential CRM performance.

Establishing Baseline Comparisons: Historical performance metrics establish normal operating parameters against which current check sampling can be evaluated. Leading mining companies maintain rolling 24-month performance databases to detect subtle shifts in analytical performance.

Drawing Meaningful Conclusions: Integrating multiple lines of evidence—including CRM performance, precision data, and check sample results—enables robust conclusion drawing. Comprehensive analysis frameworks developed by AMEC and SRK Consulting recommend decision matrices that incorporate all available quality indicators rather than isolated metrics.

FAQ About Check Sampling in Mining

How often should check sampling be performed?

Recommended Quarterly Testing: Industry best practice suggests quarterly check sampling during active exploration and development programs. This frequency balances cost considerations with the need to detect systemic issues promptly.

Situation-specific Considerations: Frequency should increase during critical decision points (e.g., resource updates, feasibility studies) and when laboratory performance shows concerning trends. Major mining companies typically double check sampling frequency in the three months preceding resource estimates.

Balancing Cost with Data Quality Needs: While continuous check sampling would be ideal, practical constraints require strategic timing. Research by CRU Group indicates optimized check sampling timing can achieve 85% of continuous monitoring benefits at approximately 30% of the cost.

What are the practical solutions for CRM depletion?

Smee et al.'s Recommendations: Leading QA/QC experts recommend purchasing sufficient quantities during initial program design to cover the project lifecycle. Their guidelines suggest acquiring 2-3 times the anticipated requirement for critical grade ranges.

Using Double Quantities for Future Testing: Forward-thinking companies maintain "time capsule" reserves of critical CRMs for long-term quality validation. This approach has proven valuable when historical data requires reassessment during advanced project stages.

Alternative Approaches: When commercial CRMs prove insufficient, in-house reference materials can be developed through extensive round-robin testing. While less ideal than certified materials, properly characterized site-specific standards can provide adequate control when prepared following ISO guidelines.

How do material properties affect check sampling results?

Impact of Hygroscopic Materials: Moisture-absorbing samples can show weight variations of 2-7% between analyses, creating apparent concentration differences unrelated to analytical performance. Standardized drying protocols are essential for materials like clays, salts, and some oxides.

Effects of Oxidation on Sample Comparison: Chemical alteration during storage significantly impacts results for sulfide materials. Studies show copper sulfides can experience grade variations of up to 18% after 60 days of improper storage due to surface oxidation.

Strategies for Difficult Materials: Challenging materials require customized protocols. For extremely heterogeneous samples like coarse gold, techniques such as screen fire assaying, larger sample volumes, and statistical methods like Poisson simulation provide more reliable comparisons than standard approaches.
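
The Poisson behaviour mentioned above can be illustrated with a small simulation of the nugget effect: when each aliquot captures only a handful of coarse gold particles, assay variance balloons. All parameters are illustrative, and a Poisson generator is hand-rolled because the Python standard library lacks one:

```python
import math
import random
import statistics

def poisson_draw(rng, lam):
    """Knuth's inversion method for a Poisson variate."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def nugget_rsd(mean_particles, n=20000, seed=3):
    """Relative standard deviation of assay results when each aliquot
    contains a Poisson-distributed number of gold particles, which is
    the nugget effect behind the larger pair counts discussed above."""
    rng = random.Random(seed)
    counts = [poisson_draw(rng, mean_particles) for _ in range(n)]
    return statistics.stdev(counts) / statistics.mean(counts)

few_particles = nugget_rsd(2.0)     # coarse gold: few particles per aliquot
many_particles = nugget_rsd(50.0)   # fine gold: many particles per aliquot
```

The RSD scales as roughly 1/sqrt(mean particle count), which is why larger sample volumes and screen fire assaying (which raise the effective particle count per determination) tame coarse-gold variance.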

Ready to Track the Next Major Resource Discovery?

Discover how early investors in major mineral finds generated exceptional returns by exploring Discovery Alert's dedicated discoveries page. With real-time alerts powered by the proprietary Discovery IQ model, you can gain a crucial market advantage when significant ASX mineral discoveries are announced.


Disclosure

Discovery Alert does not guarantee the accuracy or completeness of the information provided in its articles. The information does not constitute financial or investment advice. Readers are encouraged to conduct their own due diligence or speak to a licensed financial advisor before making any investment decisions.
