
Mainframe Security for Insurance: How Insurers Protect Policyholder Data on IBM Z Without Code Changes

Lindsay Kleuskens

March 10, 2026

RACF controls access. Encryption protects storage. Neither protects policyholder data flowing to test, analytics, or AI. See how insurers close the gap.

Insurance carriers running policy administration, claims processing, and underwriting on IBM Z mainframes face a security gap that perimeter controls alone cannot close. 

The Resource Access Control Facility (RACF) manages who can access mainframe resources. IBM Pervasive Encryption protects data at rest on disk. Multi-factor authentication (MFA) hardens login flows.

These controls are necessary but insufficient: the real vulnerability sits at the data layer.

Policyholder records, claims histories, health information, and financial details flow from the mainframe to test environments, analytics platforms, reinsurance partners, offshore claims teams, and AI underwriting models. 

In most insurance environments, that data arrives downstream in cleartext. 

Agentless tokenization – deployed at the network layer, without modifying COBOL applications or installing software on z/OS – closes this gap by replacing sensitive fields with format-preserving tokens before data leaves the mainframe.

This guide:

  • Explains why traditional mainframe security solutions fall short for insurance carriers.
  • Outlines five mainframe data protection best practices specific to the insurance industry.
  • Shows how a global insurer protecting 36 million policyholders across 11 countries solved the problem in production. 

If you are evaluating mainframe security solutions for your insurance data protection program, this is where to start.

What Is Mainframe Security and Why Does It Matter for Insurance Carriers?

Mainframe security is the combination of identity controls, data protection, monitoring, and compliance automation that protects IBM Z workloads and the data they process. 

IBM Z security starts with the platform itself – hardware encryption, integrity verification, and isolation features built into every processor generation.

But IBM Z security at the hardware level is only one layer. For a deeper look at the foundational control model, see our complete guide to mainframe security solutions.

Insurance is one of three industries, alongside banking and healthcare, most dependent on IBM Z for core operations. Large carriers process millions of policies, claims, and premium calculations through COBOL-based applications running on z/OS. 

The mainframe data protection challenge for insurers is scale and sensitivity: the data is among the most varied and regulated in any industry.

This article focuses on mainframe security for insurance, specifically: protecting policyholder data on existing IBM Z systems. It is distinct from mainframe modernization, which focuses on migrating or refactoring workloads to cloud platforms. 

The two overlap when modernization projects open new data pathways – e.g., feeding mainframe policy data into cloud analytics or AI underwriting models – that require protection. But the core challenge here is protecting the data, whether or not you are migrating.

Insurance data is also distinct from banking data in an important way. 

A bank protects cardholder data. 

An insurer protects everything simultaneously: 

  • Personally identifiable information (PII)
  • Protected health information (PHI) for health insurance lines
  • Financial records
  • Social Security numbers
  • Claims histories
  • Dependent minors' information
  • Underwriting risk scores

A single policyholder record can contain all of these. 

Each data type is also subject to different regulations, e.g., National Association of Insurance Commissioners (NAIC) model laws, the Health Insurance Portability and Accountability Act (HIPAA), state-specific privacy statutes, and the General Data Protection Regulation (GDPR) for multinational carriers.

This regulatory overlap makes the insurance mainframe security challenge structurally different from other verticals. 

Why Traditional Mainframe Security Falls Short for Insurers

The perimeter-first model (i.e., RACF, Pervasive Encryption, network segmentation) was designed for an era when mainframe data stayed on the mainframe. 

That era is over.

Modern insurance carriers push policyholder data to dozens of downstream systems. Each one represents an exposure point that traditional mainframe security controls were never designed to address.

The Access Control Gap

RACF, CA ACF2, and CA Top Secret are External Security Managers (ESMs) that control who can access mainframe resources. They are effective at authorization. They do not, by themselves, solve data exposure problems. 

An authorized claims adjuster accessing a policyholder record through CICS sees raw PII. A service account replicating data to a test environment sends raw production records downstream. The data is exposed to every system that has legitimate access to it.

The Encryption Blind Spot

IBM Z Pervasive Encryption is a powerful tool for mainframe encryption, i.e., protecting data at rest on Direct Access Storage Devices (DASD). 

But mainframe data encryption has a structural limitation: the data is decrypted in memory when accessed by any authorized application or query. 

A compromised privileged credential – or an authorized but over-permissioned process – reads fully decrypted policyholder information. Mainframe encryption protects the disk. It does not protect the data.

The Insurance-Specific Data Flow Problem

Consider where policyholder data travels once it leaves the mainframe:

  • Test, QA, and UAT environments. Insurers need realistic data to test policy administration system upgrades, claims processing changes, and pricing model updates. Raw production data in test environments creates compliance violations and breach risk.
  • Reinsurance data sharing. Treaty and facultative reinsurance requires sharing policyholder and claims data with third-party reinsurers.
  • AI and ML underwriting models. Actuarial and underwriting tools consume mainframe data for risk scoring, fraud detection, and dynamic pricing, often running in cloud environments.
  • Offshore claims processing. Third-party administrators (TPAs) and offshore teams access claims data through terminal sessions or APIs connected to the mainframe.
  • Cross-border operations. Multinational insurers must comply with data residency laws in every jurisdiction in which they operate, e.g., GDPR in Europe, LGPD in Brazil, PDPA in Southeast Asia.

None of the traditional perimeter controls follow the data to these destinations.

Recent insurance breaches confirm this pattern. 

In January 2023, attackers compromised 1.3 million Aflac cancer insurance policyholder records through a third-party vendor. 

In the same year, a contractor breach at Zurich Insurance Group exposed the information of over 757,000 automobile policyholders. 

In May 2024, a ransomware attack on Landmark Admin affected more than 800,000 individuals, compromising Social Security numbers, bank information, and health insurance policy data.

Each breach involved data that had left its primary protected environment.

How Mainframe Security Approaches Compare for Insurance

| Approach | What It Protects | Code Changes Required | Policyholder Data Protected in Test/Analytics? | Cross-Border Compliance? |
| --- | --- | --- | --- | --- |
| RACF / ACF2 / Top Secret | Resource access | None (configuration) | No – data still exposed to authorized environments | No |
| IBM Pervasive Encryption | Data at rest on DASD | None (dataset/volume level) | No – data decrypted on access | No |
| DB2 Native Encryption | Data at rest in DB2 | Minimal (DBA configuration) | No – data decrypted by authorized queries | No |
| Network-Layer Tokenization (Agentless) | Data in transit and downstream systems | None | Yes – downstream receives tokenized data | Yes – tokenized data is not PII under most frameworks |
| Application-Level Tokenization | Data in transit | Extensive (COBOL modification) | Yes | Yes |

Traditional mainframe security solutions – e.g., RACF, Pervasive Encryption, DB2 encryption – protect storage and access. They are essential components of any IBM Z security program. But only the last two rows protect the data itself as it flows to downstream consumers.

For insurance carriers with decades of COBOL policy administration code that they cannot modify, application-level changes are impractical. 

Agentless tokenization deployed at the network layer is the viable path to mainframe data protection that follows policyholder data wherever it goes.

5 Mainframe Security Best Practices for Insurance Carriers

1. Discover and Classify Policyholder Data Across All Mainframe Data Stores

You cannot protect what you have not found. 

Data discovery for insurance mainframes must cover DB2 tablespaces holding policy records, VSAM files containing claims data, IMS hierarchies with underwriting information, and data flowing through CICS transactions to downstream systems.

Insurance data is uniquely complex to classify. 

A single policyholder record in a health insurance line contains PII (name, SSN, address), PHI (diagnosis codes, treatment history), and financial data (premium amounts, bank details). 

Each data type falls under a different regulatory regime. The same field (e.g., a policyholder's date of birth) is PII under state privacy laws, PHI when linked to a health claim under HIPAA, and personal data under the GDPR for a European policyholder.

Classification is not a one-time project. It is a continuous discipline that feeds protection policies, defining which fields get tokenized, which get masked, and which get encrypted, all based on the regulatory context of the data flow, not just the field type.
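To make that context-dependence concrete, here is a minimal Python sketch of context-aware classification. The field patterns, context names, and label sets are hypothetical illustrations (not DataStealth functionality), and a production discovery tool would scan DB2, VSAM, and IMS structures rather than flat strings.

```python
import re

# Hypothetical field patterns for illustration only.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ICD10": re.compile(r"\b[A-TV-Z]\d{2}(?:\.\d{1,4})?\b"),  # PHI when tied to a claim
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

# The same match carries different regulatory labels depending on context.
CONTEXT_LABELS = {
    ("SSN", "us"): {"PII", "state-privacy"},
    ("SSN", "eu"): {"personal-data", "GDPR"},
    ("ICD10", "health-claim"): {"PHI", "HIPAA"},
}

def classify(record: str, context: str) -> dict:
    """Return {pattern_name: regulatory_labels} for fields found in a record."""
    found = {}
    for name, rx in PATTERNS.items():
        if rx.search(record):
            # Unmapped (pattern, context) pairs are flagged for human review.
            found[name] = CONTEXT_LABELS.get((name, context), {"review"})
    return found

print(classify("John Doe 123-45-6789 diag C50.9", "us"))
```

The same diagnosis code would pick up HIPAA labels when classified in a `health-claim` context, which is the point: the protection policy follows the data flow, not just the field type.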

2. Protect Non-Production Environments with Format-Preserving Tokenization

Every insurance carrier maintains multiple non-production environments: test, QA, User Acceptance Testing (UAT), actuarial modeling, and training systems. 

Each needs realistic data to function. Policy administration upgrades, claims workflow changes, and pricing model validations all depend on data that behaves like production.

The problem is obvious: feeding raw production data into non-production environments puts policyholder PII, PHI, and financial records into systems with weaker access controls, broader user access, and less monitoring.

Format-preserving tokenization addresses this by replacing sensitive values with tokens that maintain data structure, table relationships, and geographic attributes. 

A tokenized Social Security number looks like a Social Security number. A tokenized policy number preserves its format and length. Referential integrity across policy-to-policyholder-to-dependent-to-claim-to-provider relationships remains intact.
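A minimal sketch of the idea – not a NIST FF1/FF3-1 implementation and not DataStealth's algorithm – is deterministic, keyed digit substitution: the same SSN always maps to the same token, and length, separators, and format survive, so table joins on the tokenized field still line up. The key is illustrative.

```python
import hashlib
import hmac

KEY = b"demo-key"  # illustrative only; production keys live in an HSM-backed vault

def fp_token(value: str, key: bytes = KEY) -> str:
    """Deterministically replace each digit while preserving length,
    separators, and field format. A simplified, non-reversible sketch,
    not a standards-based format-preserving cipher."""
    digest = hmac.new(key, value.encode(), hashlib.sha256).digest()
    out, i = [], 0
    for ch in value:
        if ch.isdigit():
            out.append(str(digest[i % len(digest)] % 10))
            i += 1
        else:
            out.append(ch)  # dashes, spaces, and letters pass through
    return "".join(out)

print(fp_token("123-45-6789"))  # same shape as a real SSN, same token every run
```

A production system would use a vetted format-preserving scheme (e.g., NIST SP 800-38G) or a vaulted mapping rather than this digest trick, but the property being demonstrated – format and determinism preserved – is the one that keeps test data usable.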

What most people miss: generic masking and tokenization tools often break insurance data. Insurance has particularly complex relational structures – and a tool that distorts geographic attributes, disrupts inter-table relationships, or changes field formats will produce test data that is useless for validation. 

The global insurer case study later in this article shows exactly this failure mode, and how test data management done correctly preserves data utility while removing all real PII and PHI.

3. Implement Data-Centric Controls at the Network Layer

The defining characteristic of agentless mainframe security is where it operates: at the network layer, between the mainframe and every downstream consumer. This is what separates agentless tokenization from every other approach in the mainframe security solutions market.

An agentless tokenization appliance sits in the data path. It intercepts data flowing from z/OS to downstream systems – e.g., test environments, analytics platforms, reinsurance feeds, cloud AI models, offshore claims teams – and tokenizes sensitive fields in real time. 

No software is installed on z/OS. No changes to COBOL applications, DB2 schemas, CICS transactions, or IMS hierarchies. Zero MIPS consumption on the mainframe. 

This is mainframe data protection without mainframe modification.
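To illustrate the interception concept – not the actual appliance, which operates inline on wire protocols such as DRDA, FTP, and MQ rather than on files – here is a hedged Python sketch that tokenizes one sensitive column of a CSV extract as it passes through. Column names and the key are hypothetical.

```python
import csv
import hashlib
import hmac
import io

KEY = b"demo-key"  # illustrative only

def tokenize_field(value: str) -> str:
    """Deterministic, format-preserving digit substitution (simplified)."""
    d = hmac.new(KEY, value.encode(), hashlib.sha256).digest()
    i, out = 0, []
    for ch in value:
        if ch.isdigit():
            out.append(str(d[i % len(d)] % 10))
            i += 1
        else:
            out.append(ch)
    return "".join(out)

def filter_feed(raw: str, sensitive_cols: set) -> str:
    """Pass a mainframe extract through unchanged, except that sensitive
    columns are tokenized in transit -- the downstream consumer never
    sees the cleartext values."""
    reader = csv.DictReader(io.StringIO(raw))
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        for col in sensitive_cols & row.keys():
            row[col] = tokenize_field(row[col])
        writer.writerow(row)
    return buf.getvalue()

feed = "policy_no,ssn,premium\r\nP-1001,123-45-6789,420.50\r\n"
print(filter_feed(feed, {"ssn"}))
```

The application emitting the feed and the system consuming it are both unchanged; only the data path in between does the work, which is the essence of the agentless model.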

For insurers, this deployment model matters more than in almost any other vertical. Policy administration systems are the single most change-averse applications in enterprise IT. 

They process renewals, issue policies, and manage claims on tightly scheduled batch cycles. Installing agents on z/OS introduces a risk that no CIO will accept during a renewal cycle. The network-layer approach eliminates this risk entirely.

Re-tokenization between environments adds another layer of security. Tokens generated for the test environment are distinct from tokens used in the analytics environment. 

A breach in one domain reveals nothing usable in another. For a deeper look at protecting data across these flows, see our guide to mainframe-to-cloud data pipeline security.
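One common way to implement per-environment token domains – offered here as an illustrative sketch, not DataStealth's mechanism – is to derive a distinct tokenization key for each environment from a master secret, so the same value tokenizes differently in test and analytics:

```python
import hashlib
import hmac

MASTER = b"master-key"  # illustrative; real keys come from an HSM

def env_key(environment: str) -> bytes:
    """Derive a distinct tokenization key per environment, so a token
    leaked from one domain is meaningless in another."""
    return hmac.new(MASTER, environment.encode(), hashlib.sha256).digest()

def token(value: str, environment: str) -> str:
    """Format-preserving digit substitution keyed by the environment."""
    d = hmac.new(env_key(environment), value.encode(), hashlib.sha256).digest()
    return "".join(str(d[i % len(d)] % 10) if ch.isdigit() else ch
                   for i, ch in enumerate(value))

print(token("123-45-6789", "test"))
print(token("123-45-6789", "analytics"))  # a different token domain
```

Within an environment, tokens stay consistent (joins still work); across environments, there is no correlation an attacker can exploit.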

4. Enforce Data Residency for Cross-Border Insurance Operations

Multinational insurers operate across jurisdictions with different data residency requirements. 

  • A policyholder in Germany is subject to the GDPR. 
  • A policyholder in the United States is subject to state-specific laws, e.g., the NAIC Insurance Data Security Model Law, the California Consumer Privacy Act (CCPA), and the New York Department of Financial Services (NYDFS) Cybersecurity Regulation. 
  • A policyholder in Brazil is covered by the Lei Geral de Proteção de Dados (LGPD).

Each framework imposes restrictions on where personal data can be stored and processed. 

For carriers operating across 10–50 countries, this creates an operational bottleneck: centralized analytics, actuarial modeling, and AI underwriting workloads require data from multiple jurisdictions. Moving that data triggers compliance obligations in each one.

Tokenization before data crosses borders removes the regulatory trigger. 

Under most privacy frameworks, tokenized data – where sensitive values have been replaced with tokens and the token-to-value mapping is held in a vault in the originating jurisdiction – is no longer treated as identifiable personal data by the receiving system. This enables cross-border data transfers without jurisdiction-by-jurisdiction legal review for every analytics project.
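The vault model described above can be sketched as follows. Class and method names are hypothetical, and a production vault would add access controls, audit logging, and HSM-protected storage:

```python
import secrets

class TokenVault:
    """Minimal vault sketch: the token-to-value map stays in the
    originating jurisdiction; only tokens cross the border."""

    def __init__(self):
        self._value_by_token = {}
        self._token_by_value = {}  # deterministic reuse preserves joins

    def tokenize(self, value: str) -> str:
        """Return the existing token for a value, or mint a random one."""
        if value not in self._token_by_value:
            new_token = "TKN-" + secrets.token_hex(8)
            self._value_by_token[new_token] = value
            self._token_by_value[value] = new_token
        return self._token_by_value[value]

    def detokenize(self, token: str) -> str:
        # Callable only by services inside the originating jurisdiction.
        return self._value_by_token[token]

vault_de = TokenVault()  # hosted in the originating jurisdiction
t = vault_de.tokenize("Anna Mueller, 1980-02-14")
print(t)  # only this token is shipped to centralized analytics
```

Because tokens are random rather than derived from the value, a receiving system holds nothing that can be reversed without the vault – which never leaves the originating jurisdiction.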

The result is faster innovation velocity. 

  • Actuarial teams can build models using data from all operating jurisdictions. 
  • AI underwriting tools can train on a global dataset. 
  • Third-party analytics partners can work with tokenized data without requiring data processing agreements for handling personal data.

5. Audit Continuously, Reduce Scope Through Tokenization

Insurance regulators increasingly expect continuous mainframe compliance monitoring, not annual assessment cycles. 

The NAIC Insurance Data Security Model Law (MDL-668) – now adopted by more than 20 US states – requires risk-based insurance data protection programs proportional to the sensitivity of the data held. 

The NYDFS Cybersecurity Regulation (23 NYCRR 500) mandates access controls, encryption, and risk assessments with specific applicability to insurance carriers operating in New York.

Complying across multiple frameworks – i.e., NAIC, HIPAA for health insurance lines, state privacy laws, GDPR for European operations – creates a multiplicative audit burden for insurance data protection teams. Each framework has its own control requirements, reporting cadences, and evidence standards.

Tokenization collapses this burden. Systems that process only tokenized data (where real policyholder PII and PHI never appear) are removed from regulatory assessment boundaries under each framework simultaneously. Fewer systems in scope means fewer controls to implement, fewer audit hours, and lower compliance costs.

Complement data-level controls with automated monitoring. 

SMF-based tools like BMC AMI Security and IBM zSecure detect anomalous access patterns and feed events to your Security Information and Event Management (SIEM) platform. 

The combination – continuous monitoring for access anomalies plus data-level tokenization to eliminate exposure – is the strongest mainframe security posture available.

Case Study: How a Global Insurer Protects Policyholder Data Across 11 Countries

One of the world's largest insurers operates across 11 countries and serves 36 million policyholders across insurance, benefits, wealth, banking, and other lines of business.

The carrier required a consistent way to protect production data before it entered test, QA, UAT, and training environments. An existing solution had already been deployed. It failed. 

The tool distorted geographic and relational attributes, disrupted relationships between tables, and caused data to stop behaving like production data. Downstream systems broke. Test results became unreliable. The solution created more problems than it solved.

The insurer selected DataStealth to deliver a test data management platform that could remove identifiable information without compromising the integrity, structure, or usability of the underlying data.

The result: policyholder PII was de-identified across all non-production environments. 

The data relationships – i.e., policy-to-policyholder-to-dependent-to-claim-to-provider – remained intact. Likewise, the geographic attributes were preserved, while format and length were maintained. Downstream applications processed the tokenized data identically to production data, without ever handling real PII or PHI.

This matters because it addresses the deal-breaker question every insurance CISO asks: Can you protect mainframe data without breaking the applications that depend on it? The answer, proven across 36 million policyholder records in 11 countries, is yes.

Read the full case study: How a Global Insurer Protects Sensitive Data in Non-Production Environments →

What Changed in Insurance Mainframe Security in 2026?

NAIC Model Law Adoption Continues to Expand

The NAIC Insurance Data Security Model Law (MDL-668) has been adopted by more than 20 US states, requiring insurers to implement data security programs with risk assessments, access controls, and incident response procedures proportional to the sensitivity of the information they hold.

NYDFS Amendments Raised the Bar

The NYDFS Cybersecurity Regulation (23 NYCRR 500) amendments strengthened requirements for MFA, access privilege controls, and encryption of non-public information – with specific applicability to insurance carriers operating in New York.

AI in Underwriting Creates New Data Exposure

Insurers feeding mainframe policyholder data into cloud-based AI models for risk scoring, fraud detection, and dynamic pricing face a new attack surface. 

Shadow AI adds another dimension: employees using unapproved AI tools with policyholder data create exposure that neither security monitoring nor access controls can see.

IBM Z Telum Processor Enables Real-Time Fraud Detection

The AI-enabled Telum processor performs real-time anomaly analysis during mainframe transactions, complementing data-level protections with behavioral detection.

Post-Quantum Cryptography Planning is Underway

Insurance carriers hold policyholder data for decades – i.e., life insurance policies, long-tail liability claims, pension records. 

Current mainframe data encryption standards are vulnerable to future quantum threats. IBM Z Crypto Express8S cards support quantum-safe algorithms, and carriers with long data retention requirements are beginning to plan their migrations. 

For z/OS security teams, this means evaluating which mainframe data-encryption workflows should transition to quantum-resistant methods first.

The Mainframe Skills Gap is Acute

RACF specialists are retiring faster than replacements are trained. 

Insurance-specific mainframe knowledge (e.g., IMS, CICS, batch processing for renewal cycles) is harder to replace than generic z/OS security skills. 

Agentless approaches that require no z/OS expertise reduce dependency on a shrinking talent pool. They also simplify mainframe compliance by eliminating the need for z/OS-level configuration changes to achieve COBOL data protection.

How DataStealth Protects Insurance Mainframes

  • Discovery without agents. Find policyholder PII, PHI, and financial data flowing from IBM Z to downstream systems, without installing software on z/OS or consuming MIPS.
  • Tokenization without code changes. Replace sensitive policyholder fields with format-preserving tokens that maintain referential integrity across policy-claim-provider relationships. COBOL applications remain untouched.
  • Safe non-production environments. Feed test, QA, UAT, and actuarial modeling environments with tokenized data that behaves like production, without real PII or PHI.

  • Cross-border compliance by design. Tokenize policyholder data before it crosses jurisdictions to enable centralized analytics and AI underwriting without triggering data residency violations.

See how agentless mainframe data protection works for insurance carriers. Request a demo →


About the Author:

Lindsay Kleuskens

Lindsay Kleuskens is a data security specialist helping enterprises reduce risk and simplify compliance. At DataStealth, she supports large organizations in protecting sensitive data by default, without interrupting user workflows.