NYDFS drafts circular letter regarding the use of data and AI in insurance
The New York Department of Financial Services (NYDFS) is the agency responsible for regulating financial services and products sold in New York and enforcing the New York State laws that apply to their providers. The agency recently issued a proposed circular letter regarding the use of artificial intelligence and consumer data in insurance and has requested public feedback on its contents. The proposal builds on a previous, much vaguer circular letter issued in 2019, which pertains only to life insurance providers.
The new circular letter does not change any state laws and is not legally binding; rather, it clarifies how the NYDFS interprets existing law. In other words, this is a guide for companies on how to remain compliant and in the department’s good graces. Since New York is such an important market for national companies, these guidelines will carry significant weight in the industry.
The letter is similar in scope and purpose, then, to the last piece of insurance regulation we discussed: Colorado’s draft Algorithm and Predictive Model Governance Regulation (which has since been finalized and expanded on in newly proposed quantitative testing guidelines; stay tuned for our thoughts on those!). This post highlights the similarities and novelties in New York’s approach compared to Colorado’s, so if you haven’t read our previous analysis, which also provides context and background on the issue of data and algorithms in insurance, we encourage you to do so.
Scope
The circular letter references a number of state laws pertaining to fairness and discrimination. One of these is Insurance Law Article 26, which prohibits entities subject to the agency’s supervision from denying coverage, charging different premiums, or taking other actions “because of race, color, creed, national origin, or disability.” Another is Insurance Law § 4224(a), which prohibits “unfair discrimination between individuals of the same class and of equal expectation of life, in the amount or payment or return of premiums, or rates charged for policies of life insurance.”
The contents of the circular letter pertain specifically to the use of Artificial Intelligence Systems (AIS) and External Consumer Data and Information Sources (ECDIS) in underwriting and pricing. The letter scopes both terms broadly, covering anything used, in whole or in part, to supplement or serve as a proxy for a “traditional” underwriting or pricing process (emphasis ours):
ECDIS includes data or information used – in whole or in part – to supplement traditional medical, property or casualty underwriting or pricing, as a proxy for traditional medical, property or casualty underwriting or pricing, or to establish “lifestyle indicators” that may contribute to an underwriting or pricing assessment of an applicant for insurance coverage. For the purposes of this Circular Letter, ECDIS does not include an MIB Group, Inc. member information exchange service, a motor vehicle report, or a criminal history search.
AIS means any machine-based system designed to perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement, that is used – in whole or in part – to supplement traditional medical, property or casualty underwriting or pricing, as a proxy for traditional medical, property or casualty underwriting or pricing, or to establish “lifestyle indicators” that may contribute to an underwriting or pricing assessment of an applicant for insurance coverage.
Already, there is a subtlety that may distinguish these guidelines from Colorado’s by broadening their scope. AIS plausibly covers “non-traditional” algorithms (e.g., neural networks or random forests) applied to “traditional” underwriting inputs; for life insurance, those inputs might include the results of blood work or other medical tests. Insurers and other interested parties are likely to ask for clarification on this point: can a system be AIS if it does not use ECDIS?
Colorado, on the other hand, only regulates the use of ECDIS, which it defines, possibly more narrowly, as “a data or an information source that is used by a life insurer to supplement or supplant traditional underwriting factors or to establish lifestyle indicators that are used in insurance practices.” Our last post pointed out that Colorado’s draft guidelines did not appear to regulate novel methods applied to traditional data; this was confirmed in the finalized guidelines.
Another difference in scope between this letter and Colorado’s guidance is the inclusion of a caveat: “This Circular Letter … is not intended to address phases of the insurance product lifecycle other than underwriting and pricing.” This is an interesting carve-out: anti-discrimination laws in other settings, such as housing and credit, are known to apply to advertising practices, which are increasingly targeted algorithmically.
Testing Requirements
Like Colorado’s guidelines, the NYDFS letter asks insurers to take a comprehensive approach to managing risk around AIS and ECDIS, including governance procedures, oversight boards, and documentation of decisions and changes. The NYDFS does not prescribe exactly what this approach should look like, acknowledging that “there is no one-size-fits-all approach to managing data and decisioning systems.” However, the letter offers fairly concrete do’s and don’ts to insurers regarding discrimination testing.
First, the letter puts the onus on insurers to follow a particular discrimination testing and remediation procedure: “An insurer should not use ECDIS or AIS in underwriting or pricing unless the insurer can establish through a comprehensive assessment that the underwriting or pricing guidelines are not unfairly or unlawfully discriminatory in violation of the Insurance Law.” The “minimum” such assessment loosely resembles the steps of a disparate impact lawsuit:
“Assessing whether the use of ECDIS or AIS produces disproportionate adverse effects in underwriting and/or pricing on similarly situated insureds, or insureds of a protected class.”
“If there is prima facie showing of such a disproportionate adverse effect, further assessing whether there is a legitimate, lawful, and fair explanation or rationale for the differential effect on similarly situated insureds.”
“If a legitimate, lawful, and fair explanation or rationale can account for the differential effect, further conducting and appropriately documenting a search and analysis for a less discriminatory alternative variable(s) or methodology that would reasonably meet the insurer’s legitimate business needs. If a less discriminatory alternative exists, the insurer should modify its use of ECDIS or AIS accordingly.”
The language above, which calls for a search for a less discriminatory alternative, is striking: it appears to be new and builds on recent literature in the field. The agency even suggests a number of quantitative metrics that could form the basis of a less-discriminatory-model search, such as adverse impact ratios, marginal effects, or standardized mean differences, but it stops short of favoring a particular statistic or method.
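To make those statistics concrete, here is a minimal sketch of how two of them, the adverse impact ratio and the standardized mean difference, are commonly computed; this corresponds roughly to the first step of the assessment quoted above. The letter names the metrics but does not define them, so the formulations, the 80% rule of thumb, and every number below are illustrative assumptions rather than anything the NYDFS prescribes.

# A minimal sketch of two metrics the letter names, the adverse impact ratio
# and the standardized mean difference, using common formulations and made-up
# data. The letter does not define these metrics or set thresholds for them.
import numpy as np

# Hypothetical underwriting outcomes: 1 = offered coverage at standard rates.
approved_protected = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])
approved_reference = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 1])

# Adverse impact ratio: the favorable-outcome rate for the protected group
# divided by the rate for the reference group. The informal "80% rule" from
# employment contexts flags ratios below 0.8.
air = approved_protected.mean() / approved_reference.mean()
print(f"Adverse impact ratio: {air:.2f}")  # 0.40 / 0.80 = 0.50 here

# Hypothetical annual premiums (dollars) quoted to members of each group.
premium_protected = np.array([1450.0, 1520.0, 1380.0, 1610.0, 1490.0])
premium_reference = np.array([1300.0, 1350.0, 1280.0, 1400.0, 1320.0])

# Standardized mean difference: the gap in group means scaled by the pooled
# standard deviation (a Cohen's d-style effect size).
pooled_sd = np.sqrt(
    (premium_protected.var(ddof=1) + premium_reference.var(ddof=1)) / 2
)
smd = (premium_protected.mean() - premium_reference.mean()) / pooled_sd
print(f"Standardized mean difference in premiums: {smd:.2f}")

In a less-discriminatory-alternative search, an insurer would presumably recompute metrics like these across candidate model variants and document whichever variant preserves predictive performance with the smallest disparity.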
Of course, an insurer could choose to cherry-pick whichever statistics favored its analyses, a weakness that we also pointed out in our analysis of the Colorado guidelines. Since we wrote that post, however, Colorado has taken a much stronger stance: its draft testing procedure guidelines define quantitative fairness criteria down to the p-value!
So What?
Whether the circular has teeth remains an open question. As Debevoise & Plimpton point out, “the Proposed Circular does not address how insurers should test for discriminatory impact on protected classes for which the insurer does not collect any data,” a criticism we also raised of Colorado’s draft guidelines. Since we wrote that post, the Colorado guidelines have been narrowed to focus specifically on race, which can (and must, under the new draft testing guidelines) be estimated using Bayesian Improved First Name Surname Geocoding (BIFSG). Perhaps insurance companies successfully argued that they should not be forced to do all this testing unless they were also forced to collect protected class information in the first place. Will they make the same argument in New York?
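For readers unfamiliar with BIFSG, the method produces a probability distribution over race for each individual by combining a surname-based prior with first-name and geography likelihoods via Bayes’ rule. Below is a toy sketch of that update; every probability table is invented for illustration, whereas real implementations draw on sources such as the Census surname list, published first-name tables, and block-group demographics.

# Toy sketch of the BIFSG update: combine a surname-based prior on race with
# first-name and geography likelihoods via Bayes' rule. Every probability
# below is invented purely for illustration.
races = ["white", "black", "hispanic", "asian", "other"]

# P(race | surname): prior from a hypothetical surname table.
p_race_given_surname = {
    "white": 0.70, "black": 0.10, "hispanic": 0.10, "asian": 0.05, "other": 0.05,
}
# P(first name | race) and P(geography | race): likelihoods from hypothetical
# first-name and geocoded block-group tables.
p_first_given_race = {
    "white": 0.020, "black": 0.010, "hispanic": 0.002, "asian": 0.001, "other": 0.005,
}
p_geo_given_race = {
    "white": 0.0001, "black": 0.0004, "hispanic": 0.0002, "asian": 0.0001, "other": 0.0001,
}

# Posterior is proportional to P(race | surname) * P(first | race) * P(geo | race).
unnormalized = {
    r: p_race_given_surname[r] * p_first_given_race[r] * p_geo_given_race[r]
    for r in races
}
total = sum(unnormalized.values())
posterior = {r: value / total for r, value in unnormalized.items()}

for r, p in posterior.items():
    print(f"P({r} | first name, surname, geography) = {p:.3f}")

Because the output is a distribution rather than a hard label, downstream disparity statistics are often computed as probability-weighted averages over these posterior probabilities.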