For a decade, fintechs sold a story: traditional credit scoring is biased and outdated, and machine learning will fix it by considering thousands of alternative data points. Some of that is true. The trouble is that swapping one black box for a more sophisticated one doesn’t satisfy lending laws written in the 1970s, and regulators have noticed.
The result is a quiet but consequential collision between the AI-underwriting industry and the agencies tasked with making sure credit decisions are explainable, non-discriminatory, and contestable. The lenders are losing more rounds than they expected.
The federal pressure point
The Consumer Financial Protection Bureau has spent the last three years sharpening its position on algorithmic underwriting. Its core argument is straightforward: the Equal Credit Opportunity Act requires lenders to give applicants specific, accurate reasons when credit is denied, and “the model said no” is not a reason. In a 2023 circular, the CFPB explicitly stated that lenders cannot hide behind model complexity: if you can’t explain a denial, you can’t legally issue one. That position threatens the entire deep-learning underwriting stack, because many of these models are not interpretable in the sense the law demands. The CFPB has also begun examining whether alternative data sources, such as cash flow, browsing patterns, or social signals, function as proxies for race or national origin. Where they do, the model is illegal regardless of intent.
State agencies are getting aggressive
Federal action gets the headlines, but state regulators have moved faster. New York’s Department of Financial Services has issued guidance treating algorithmic discrimination as a fair-lending violation enforceable under state law. California’s DFPI has signaled similar intent, and Colorado passed legislation in 2024 requiring impact assessments for high-risk AI systems, including consumer lending. State attorneys general are also using consumer protection statutes to attack opaque underwriting, and they don’t need new authority to do it. The patchwork creates real headaches for national lenders, who can no longer assume that a model approved by their compliance team in Delaware will pass muster in Albany. Several fintechs have quietly retreated from states with the most aggressive enforcement postures, which is itself evidence of the pressure working.
What lenders are doing now
The serious players have stopped fighting and started investing in explainable AI. SHAP values, counterfactual explanations, and constrained model architectures are becoming standard, not because lenders love them, but because regulators do. Some firms have pivoted from pure deep learning to hybrid models that use ML for risk ranking but require interpretable rules for the actual approve/deny decision. The compromise is honest: keep the predictive power where it’s allowed, but make the consequential output legible. Smaller fintechs that can’t afford the compliance buildout are partnering with bank sponsors who handle it for them, or quietly exiting credit products entirely. The era of “trust the algorithm” lending is ending faster than most observers predicted.
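To make the hybrid approach concrete, here is a minimal sketch of what an interpretable approve/deny layer with ECOA-style reason codes can look like. Everything in it is hypothetical: the feature names, weights, baseline values, and cutoff are invented for illustration, and a real scorecard would be statistically calibrated and fair-lending tested. The point is only that a linear layer makes each denial reason directly attributable to a feature's contribution.

```python
# Hypothetical interpretable approve/deny layer with reason codes.
# Weights, baselines, and cutoff are illustrative, not a real scorecard.

APPROVE_CUTOFF = 0.0

# Linear scorecard: one weight per feature, plus a population baseline
# used to attribute each feature's contribution to the final score.
WEIGHTS = {
    "months_since_delinquency": 0.04,
    "debt_to_income": -2.5,
    "credit_utilization": -1.8,
    "account_age_years": 0.15,
}
BASELINE = {
    "months_since_delinquency": 24,
    "debt_to_income": 0.30,
    "credit_utilization": 0.40,
    "account_age_years": 6.0,
}
REASON_TEXT = {
    "months_since_delinquency": "Recent delinquency on an account",
    "debt_to_income": "Debt-to-income ratio too high",
    "credit_utilization": "Credit utilization too high",
    "account_age_years": "Limited length of credit history",
}


def score(applicant: dict) -> float:
    """Score relative to the baseline applicant."""
    return sum(WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS)


def decide(applicant: dict, max_reasons: int = 2):
    """Return (decision, reasons).

    On a denial, the reasons are the features whose contributions
    pulled the score down the most -- the 'specific reasons' an
    adverse action notice must state.
    """
    if score(applicant) >= APPROVE_CUTOFF:
        return "approve", []
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS
    }
    worst = sorted(contributions, key=contributions.get)[:max_reasons]
    return "deny", [REASON_TEXT[f] for f in worst]


decision, reasons = decide({
    "months_since_delinquency": 3,
    "debt_to_income": 0.55,
    "credit_utilization": 0.92,
    "account_age_years": 2.0,
})
# -> ("deny", ["Credit utilization too high",
#              "Recent delinquency on an account"])
```

In practice the upstream ML model would feed a risk rank into a layer like this; because the consequential output is linear, the top adverse contributions are, by construction, accurate reasons rather than post-hoc rationalizations.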
The bottom line
AI underwriting isn’t going away, but the version that wins won’t be the most accurate. It’ll be the most explainable. Regulators have made that bargain explicit, and the market is finally listening.