Contents:
  • The Integration Problem Nobody Talks About
  • Algorithms Learn the Wrong Lessons
  • The Hallucination Problem
  • The Legacy System Trap
  • Compliance: Where AI Meets Immovable Objects
  • Why Banking Gets Harder With AI
  • What AI Cannot Create
  • Crisis: When Pattern Recognition Fails
  • What Actually Works vs. What's Stuck
  • Strategic Innovation: Still Human Territory
  • The Irony
  • What This Means for Financial Services Leaders
  • Getting Started

A bank executive recently told me their institution spent $30 million on an AI credit decisioning system that's been "almost ready for deployment" for eighteen months.

Another fintech founder admitted their fraud detection model works brilliantly in testing but generates so many false positives in production that customers are threatening to leave. A third company discovered its automated compliance system was confidently flagging legitimate transactions while missing actual violations.

These aren't outliers. Recent data shows that 70% of AI implementations in banking deliver no measurable ROI. The problem isn't the algorithms or the investment. It's everything around them.

Banks poured $30 billion into AI in 2025, and model capabilities are reportedly doubling every 100 days. Yet only 26% of customers trust AI with their financial data, and 63% still demand human conversations for important decisions.

Building AI for finance isn't like building AI for retail or social media. You can't experiment with other people's money. You can't A/B test fraud detection. You can't iterate on credit decisions and see what happens.

If Netflix recommends the wrong movie, users scroll past it. If an AI fraud system fails, millions disappear. If a credit algorithm is biased, people lose homes and institutions face federal investigations.

The difference changes everything about what AI can and cannot do in financial services.

The Integration Problem Nobody Talks About

Every payment processor, lender, and bank runs on different data formats, proprietary APIs, and incompatible systems. Your payroll system doesn't talk to your banking app.

Your expense management tool can't coordinate with accounting software. Payment processors have zero visibility into cash flow forecasting.

This creates a paradox: AI agents need unified data streams to function, but financial infrastructure is deliberately fragmented for security and competitive reasons.

Consider what happens when a fintech tries to deploy an agentic AI system:

Week 1: The model trains beautifully on test data
Week 4: Integration with the first payment processor takes three times longer than expected
Week 8: The second processor uses completely different field names for the same data
Week 12: A third system requires manual data transformation before AI can even read it
Week 16: The team realizes they're spending more time building translation layers than training models
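
What "building translation layers" looks like in practice: below is a minimal Python sketch of the normalization code every integration ends up needing. The processor labels and field names are hypothetical, but the pattern (one canonical schema plus a hand-maintained mapping per processor) is the unglamorous reality:

```python
from dataclasses import dataclass
from typing import Any

# Canonical transaction schema the AI pipeline expects.
@dataclass
class Transaction:
    amount_cents: int
    currency: str
    merchant_id: str
    timestamp: str  # ISO 8601

# Hypothetical per-processor field mappings; every new integration adds one.
FIELD_MAPS = {
    "processor_a": {"amount_cents": "amt", "currency": "ccy",
                    "merchant_id": "merch_ref", "timestamp": "ts"},
    "processor_b": {"amount_cents": "amount_minor_units", "currency": "currency_code",
                    "merchant_id": "merchant_identifier", "timestamp": "created_at"},
}

def normalize(raw: dict[str, Any], processor: str) -> Transaction:
    """Translate a processor-specific payload into the canonical schema."""
    mapping = FIELD_MAPS[processor]
    return Transaction(
        amount_cents=int(raw[mapping["amount_cents"]]),
        currency=str(raw[mapping["currency"]]).upper(),
        merchant_id=str(raw[mapping["merchant_id"]]),
        timestamp=str(raw[mapping["timestamp"]]),
    )

raw = {"amount_minor_units": 4999, "currency_code": "usd",
       "merchant_identifier": "m-123", "created_at": "2025-06-01T12:00:00Z"}
print(normalize(raw, "processor_b"))
```

Multiply this by every processor, KYC provider, and banking API, and the translation layer quickly dwarfs the model code it feeds.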

Even with open banking initiatives gaining traction, the Model Context Protocol and similar frameworks remain emerging technologies rather than established infrastructure. Solving these integration challenges requires specialized expertise connecting payment processors, KYC/AML systems, and banking APIs into functional platforms.

The organizational cost compounds the technical challenge. Financial institutions deploying AI aren't just installing software. They're rewiring how decisions get made, how data flows, and how employees interact with systems.

This requires:

  • Massive retraining programs
  • New governance frameworks
  • Cultural shifts in institutions where "that's how we've always done it" is a feature, not a bug

Federal Reserve research confirms what many banks discover: productivity often drops initially as teams struggle to adapt, legacy processes collide with automated workflows, and organizations discover that powerful AI tools mean nothing if people don't trust them.

Algorithms Learn the Wrong Lessons

AI doesn't create bias. It scales it with terrifying efficiency.

Hello Digit's automated savings algorithm provides a perfect example. Designed to help users save money without causing overdrafts, the system repeatedly caused exactly the problem it promised to prevent. The CFPB found nearly 70,000 overdraft reimbursement requests since 2017. The $2.7 million penalty wasn't just for the overdrafts but for the disconnect between marketing promises and algorithmic reality.

The deeper problem: AI learns from historical data that reflects decades of discriminatory human decisions.

Investigations into Freddie Mac and Fannie Mae's mortgage approval algorithms revealed racial discrimination patterns baked into the models. Not because engineers programmed bias, but because training data reflected historical redlining, discriminatory lending practices, and systemic inequalities the AI learned to replicate.

When an AI system trains on decades of biased credit decisions, it doesn't see discrimination. It sees patterns that optimize for historical outcomes. The result? Continued underservice to minority applicants through mathematical precision rather than malice.

Proxy discrimination makes this worse. Even after removing race, gender, and protected characteristics from models, AI finds correlations that serve as stand-ins:

  • ZIP codes correlate with higher default rates (because those areas were historically marginalized)
  • Employment history serves as a proxy for socioeconomic status
  • Education credentials encode class and race signals
  • Shopping patterns reveal demographic information
  • Social media activity provides additional proxies

The algorithm effectively discriminates without ever seeing the protected variables directly. Implementing robust fraud prevention and KYC systems requires careful attention to bias mitigation at every stage.
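
One common first-pass screen for outcome disparity is the "four-fifths rule" borrowed from US employment law. Here is a minimal sketch with hypothetical group labels and data; note that it audits outcomes only, and catching proxy variables still requires testing individual features against protected attributes:

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, approved_col: str,
                         reference_group: str) -> pd.Series:
    """Approval rate of each group divided by the reference group's rate.

    Ratios below 0.8 fail the 'four-fifths' screen, a common first-pass
    indicator of disparate impact in automated credit decisioning.
    """
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates[reference_group]

# Hypothetical decisions from a trained credit model.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   1,   1,   0],
})
print(adverse_impact_ratio(decisions, "group", "approved", reference_group="B"))
```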

Courts have held that choosing to deploy an AI system producing discriminatory outcomes is legally equivalent to implementing a discriminatory policy, regardless of intent. Financial institutions can't hide behind "the algorithm made that decision" as a defense.

The Hallucination Problem

AI hallucinations (when models generate plausible but completely incorrect information) might be acceptable quirks in consumer applications. In fintech, they're compliance nightmares triggering regulatory violations, customer losses, and federal investigations.

The problem isn't occasional mistakes. It's that AI makes mistakes with the same confidence it displays when correct, presenting:

  • Fabricated loan calculations
  • Invented regulatory requirements
  • Fictional investment performance data

All delivered in authoritative tones that fool even experienced professionals.

NatWest partnered with IBM specifically to train its AI assistant Cora+ with safeguards against the pitfalls of open, general-purpose AI models. The bank understood that wrong account balances are annoying, but hallucinated interest rates, loan terms, or compliance requirements expose institutions to massive liability.

One wrong calculation doesn't affect one mortgage. It affects every downstream process depending on that calculation:

  1. Credit risk assessment
  2. Capital reserve requirements
  3. Secondary market valuations
  4. Regulatory reporting

When Wells Fargo's mortgage modification underwriting tool had a calculation error in 2018, it didn't just make a mistake. It caused more than 500 people to lose their homes and denied hundreds more the loan modifications they qualified for.

The Bank for International Settlements warns that AI hallucinations represent a critical challenge in balancing speed and accuracy. While AI's ability to enable rapid, real-time analysis is valuable for timely decision-making, that speed must be tempered by efforts to mitigate risks like hallucinations, biases, and vulnerabilities.

Large language models are designed to generate plausible-sounding text, not necessarily accurate text. This makes them fundamentally unsuited for financial contexts where "sounds right" and "is right" must be identical.
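
One practical mitigation is to never let a generated number reach a customer unchecked. Here is a minimal sketch of a guardrail that recomputes a model-quoted loan payment with the standard amortization formula and rejects anything that deviates; the loan figures are hypothetical:

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard amortization formula: deterministic, auditable math."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

def verify_quoted_payment(quoted: float, principal: float, annual_rate: float,
                          months: int, tolerance: float = 0.01) -> bool:
    """Reject any model-generated figure that deviates from the recomputed value."""
    return abs(quoted - monthly_payment(principal, annual_rate, months)) <= tolerance

# A hallucinated quote of $1,499.00 on a $250,000 loan at 6.2% over 360
# months fails the check (the recomputed payment is roughly $1,531).
print(verify_quoted_payment(1499.00, 250_000, 0.062, 360))  # False
```

The principle generalizes: anything a language model asserts about rates, terms, or balances should be validated against a deterministic source of truth before it leaves the system.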

The Legacy System Trap

Here's the irony: institutions with the most resources to invest in AI run on technology infrastructure from the 1980s.

Only 52% of banks have fully implemented basic chatbots and biometric security (technologies predating the current AI boom by years). The bottleneck isn't algorithmic sophistication. It's the unglamorous work of modernizing core systems designed when "real-time" meant same-day processing.

Real-time fraud detection, machine learning inference, and instant personalization aren't features you can bolt onto legacy mainframe systems. Modernizing legacy systems and assessing technical debt has become critical for institutions trying to deploy AI at scale.

Why fintechs gain market share: Not because they have better AI algorithms. Because they're built on modern cloud-native architectures from day one.

A digital bank founded in 2020 doesn't need to figure out how to make AI talk to a mainframe predating the internet. It deploys machine learning models directly into microservices architectures designed for exactly that purpose.

The competitive advantage isn't the AI itself (you can buy state-of-the-art models from OpenAI, Anthropic, or Google). It's having infrastructure that can actually use those models in production without layers of middleware translation introducing latency, errors, and security vulnerabilities.
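
For illustration, here is roughly what "deploying a model directly into a microservice" means: a minimal, hypothetical fraud-scoring endpoint (FastAPI is one common choice), with no mainframe middleware in the path. The endpoint path, fields, and scoring logic are placeholders:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    amount_cents: int
    merchant_id: str
    country: str

@app.post("/v1/fraud-score")
def fraud_score(req: ScoreRequest) -> dict:
    # Placeholder for a real model call; the architectural point is that
    # the scoring service is independently deployable, versioned, and
    # reachable over plain HTTP by any system that needs it.
    score = 0.9 if req.amount_cents > 1_000_000 else 0.1
    return {"score": score, "model_version": "2025-01-demo"}
```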

The human factor: As one banking technology veteran observed after 30 years in the industry, "the largest part of the delay in getting better automation was training and skilling up the branch staff."

Technology has never been the real bottleneck. Training humans to use automation has been. AI doesn't solve this problem; it adds new complexity. Now you need employees who understand how to oversee AI decisions, explain algorithmic outputs to customers, and know when to override automated recommendations.

Compliance: Where AI Meets Immovable Objects

The regulatory landscape for AI in fintech is a minefield where rules are still being written, enforcement is accelerating, and the tension between AI's experimental nature and compliance's demand for certainty creates an unsolvable dilemma.

EU AI Act requirements for "high-risk" systems:

  • Explain exactly how AI reached every decision
  • Prove the model doesn't discriminate
  • Demonstrate continuous monitoring for drift and degradation
  • Build governance infrastructures around AI systems
  • Implement model audit tooling
  • Create bias detection pipelines
  • Maintain documentation frameworks

This isn't a checkbox exercise. It requires entirely new organizational structures.
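
The drift-monitoring requirement alone implies real tooling. One widely used screen is the population stability index (PSI), which compares the live score distribution against the distribution the model was validated on. A minimal sketch with synthetic data (the 0.25 alert threshold is a common convention, not a regulatory number):

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between validation-time scores and live production scores."""
    # Equal-frequency bin edges from the validation distribution
    # (assumes continuous scores, so the quantile edges are distinct).
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
validation_scores = rng.beta(2, 5, 10_000)  # distribution at model sign-off
live_scores = rng.beta(2.6, 4, 10_000)      # drifted production distribution
psi = population_stability_index(validation_scores, live_scores)
print(psi, "-> review model" if psi > 0.25 else "-> stable")
```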

DORA (Digital Operational Resilience Act) went into effect in early 2025, strengthening IT risk management by requiring:

  • Real-time incident detection
  • Comprehensive vendor audits
  • Traceability across infrastructure

Banks can't just deploy AI and hope it works. Monitoring systems must detect when models degrade. Vendor management frameworks must ensure third-party AI services meet regulatory standards. Incident response capabilities must identify and remediate AI failures before they become systemic problems.

The multi-jurisdiction nightmare: What complies with EU regulations might not satisfy UK requirements post-Brexit. What works in Singapore's regulatory sandbox might be illegal in the United States.

Financial institutions operating across borders navigate a maze where:

  • The UK's post-Brexit operational resilience regime differs from the EU's DORA
  • US CFPB takes a harder line on algorithmic accountability than most international regulators
  • Emerging markets each develop their own AI governance standards

Only 9% of UK bank executives feel prepared for upcoming AI regulations.

The fundamental tension: AI moves fast. Regulation moves slowly. Compliance requires certainty that AI systems struggle to provide.

An AI model that learns and adapts continuously is powerful but inherently unpredictable. How do you certify a system that's different today than yesterday?

Financial institutions discover they need to slow down AI deployments, freeze model versions for regulatory review, and accept that cutting-edge AI capability will always be faster than cutting-edge compliant AI deployment.

Why Banking Gets Harder With AI

Every new AI system requires constant retraining, change management initiatives, and cultural adaptation in an industry where stability and predictability are fundamental values.

Financial institutions aren't just deploying new software. They're asking employees who've spent decades mastering one way of working to change how they make decisions, interact with customers, and assess risk while maintaining the same standards of accuracy and compliance.

The result? Organizational exhaustion where staff spend more time learning new AI tools, attending training sessions, and adapting to algorithmic recommendations than they save through automation.

This resistance isn't irrational. It's a rational response to the clash between banking's risk-averse culture and AI's experimental nature.

Grasshopper Bank's CTO articulates this tension: "AI recommendations will never replace human judgment," especially in lending and due diligence. AI might assist in portfolio monitoring, but the final credit approval decision must always be human.

Their caution about using generative AI in risk and credit decisioning stems from a fundamental incompatibility: modern generative models often function as "black boxes," and that lack of transparency could undermine both trust and the auditability that regulators require.

Klarna's reversal tells the story. Initially boasting their chatbot did the work of 700 employees and would improve profits by $40 million, CEO Sebastian Siemiatkowski eventually acknowledged: "From a brand perspective, a company perspective, it's so critical that you are clear to your customer that there will always be a human if you want."

The company lifted an 18-month hiring freeze and began recruiting for customer service roles again. Customers don't just want problems solved. They want the option of human contact, especially when dealing with financial stress, disputed charges, or complex situations where algorithmic responses feel inadequate.

What AI Cannot Create

AI works best when the present looks like the past. Could an AI have invented:

  • Buy-now-pay-later by analyzing historical payment data?
  • Prediction markets by studying trading patterns?
  • Embedded finance by examining banking trends?
  • Stablecoins by reviewing currency history?

No. These innovations required human insight to recognize unmet needs that weren't visible in any dataset.

AI can optimize mortgage approval processes. But a human had to invent the mortgage. The distinction matters because fintech's most valuable innovations come from seeing what customers need before they know to ask for it.

An algorithm analyzing 2005 transaction data would never predict that people wanted to:

  • Split restaurant bills through an app
  • Rent out homes to strangers
  • Invest spare change automatically

None of these behaviors existed in the training data.

The 2025 explosion of prediction markets, the GENIUS Act creating frameworks for stablecoins, and the embedded finance revolution were conceived by humans who understood changing customer behavior, regulatory opportunities, and technological possibilities in ways historical data couldn't reveal.

AI excels at scaling these innovations once they exist (optimizing pricing, improving user matching, detecting fraud in new payment types). But the creative spark asking "what if we did this completely differently?" remains uniquely human.

Crisis: When Pattern Recognition Fails

Financial crises are defined by being unlike anything before. The 2008 housing crisis, COVID-19 economic shock, and 2023 banking crisis shared one characteristic: they didn't resemble anything in training data.

AI trained on decades of market stability cannot advise clients during instability because its operating principle (future patterns will resemble past patterns) breaks down exactly when customers need guidance most.

MIT Sloan research confirms that AI models work best when the present looks like the past, meaning they struggle precisely during the moments that define financial careers and customer relationships.

Federal Reserve Governor Michael Barr warns that AI-powered trading algorithms pursuing profit maximization may result in "tacit collusion, market manipulation, or trading strategies that result in significant market volatility or even systemic risk."

When multiple institutions deploy similar AI models trained on similar data, they create dangerous feedback loops:

  • Algorithmic trading moves markets
  • Automated risk management triggers simultaneous selloffs
  • AI-driven liquidity management withdraws capital exactly when markets need it

The limitation is structural. Unprecedented events lack historical precedent. No amount of data prepares an AI system for genuine novelty.

Human advisors bring experience, intuition, and the ability to reason by analogy across different contexts. AI can only ask "have I seen this exact configuration before?"

What Actually Works vs. What's Stuck

The gap between AI hype and reality becomes clear examining what's deployed in production versus perpetually "coming soon."

Working in production:

  • Fraud detection (95% accuracy with acceptable false positives)
  • Basic chatbots handling low-stakes queries
  • Transaction monitoring flagging suspicious activity
  • Document processing automation

These succeed because they're narrow, well-defined problems with clear success metrics and acceptable error rates.
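
Even the fraud success story carries a caveat imposed by base rates. Because genuine fraud is rare, an accurate model still produces alerts that are mostly false positives, which is exactly the production problem described at the top of this piece. A quick Bayes calculation (the prevalence figure is hypothetical) shows why:

```python
def alert_precision(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Share of fraud alerts that are actually fraud, via Bayes' rule."""
    true_alerts = sensitivity * prevalence
    false_alerts = (1 - specificity) * (1 - prevalence)
    return true_alerts / (true_alerts + false_alerts)

# With 95% sensitivity and 95% specificity, but only 0.2% of transactions
# fraudulent, fewer than 4% of alerts are real fraud:
print(alert_precision(0.95, 0.95, 0.002))  # ~0.037
```

This is why "acceptable false positives" does heavy lifting in that bullet: thresholds get tuned per institution, and humans still review the alert queue.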

Stuck in pilot purgatory:

  • Credit decisioning without human review
  • Autonomous compliance monitoring
  • Fully automated wealth management
  • Unsupervised lending decisions

These require reliability, explainability, and accountability current AI cannot consistently deliver.

Banks report that 70% of AI use cases have no reported outcomes or measurable ROI. Not because the technology doesn't work in lab conditions, but because moving from "works in a controlled test" to "works reliably at scale under regulatory scrutiny with real customer money" is where most AI projects die.

The pattern: AI succeeds when supporting human decisions (flagging suspicious transactions for review, surfacing relevant customer data, automating document processing) but struggles when replacing human judgment entirely.

Goldman Sachs CEO David Solomon wants to "completely reimagine" processes with AI but acknowledges "that doesn't mean we will have less people." Reimagining work with AI differs from eliminating workers with AI.

Citigroup assesses where AI fits into more than 50 of its "largest and most complex processes" from KYC to loan underwriting, but implementations focus on driving "new sources of efficiency" rather than full automation.

The distinction matters: efficiency means AI handles parts of workflows while humans retain decision authority. Automation means AI owns the entire process end-to-end. The former is achievable today. The latter remains aspirational for high-stakes financial decisions.
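
In code, the efficiency pattern often reduces to confidence-band routing: the model decides only the clear-cut cases and escalates everything ambiguous to a person. A minimal sketch; the thresholds are hypothetical and would in practice be set by risk appetite, regulation, and back-testing:

```python
from enum import Enum

class Route(Enum):
    AUTO_APPROVE = "auto_approve"
    HUMAN_REVIEW = "human_review"
    AUTO_DECLINE = "auto_decline"

def route_transaction(fraud_score: float, low: float = 0.05,
                      high: float = 0.90) -> Route:
    """AI owns the obvious cases; humans retain authority over the rest."""
    if fraud_score >= high:
        return Route.AUTO_DECLINE   # near-certain fraud: block automatically
    if fraud_score <= low:
        return Route.AUTO_APPROVE   # near-certain legitimate: pass through
    return Route.HUMAN_REVIEW       # ambiguous: queue for an analyst

print(route_transaction(0.42))  # Route.HUMAN_REVIEW
```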

Strategic Innovation: Still Human Territory

While AI optimizes existing processes, humans create strategic breakthroughs defining competitive advantage:

  1. Identifying emerging customer needs before data shows them
    Someone had to notice millennials delaying home purchases and imagine peer-to-peer lending before transaction data revealed demand. Someone had to recognize gig workers needed different banking products before their banking behavior created clear signals.
  2. Creating new regulatory frameworks
    The GENIUS Act for stablecoins and EU's MiCA regulations required human negotiation, political strategy, and ability to envision regulatory structures for financial instruments that didn't fully exist yet. AI cannot lobby regulators, build coalitions, or craft policy language balancing innovation with consumer protection.
  3. Building trust in new paradigms
    Getting customers to accept digital-only banking, convincing merchants to adopt new payment rails, and persuading regulators that decentralized finance can be safe requires human credibility and relationship capital algorithms cannot accumulate.
  4. Forming partnerships and ecosystems
    When Block, Anthropic, and OpenAI formed the Agentic AI Foundation to establish open standards, they weren't solving a technical problem. Creating ecosystems requires partnership negotiation, shared vision alignment, and collective standard-setting demanding human judgment.

The strategic work goes further: deciding which technologies to standardize and which to keep proprietary, determining when to compete and when to collaborate, navigating geopolitical considerations in cross-border payments, and balancing stakeholder interests across customers, regulators, shareholders, and employees. All of these decisions require human judgment synthesizing technical possibility, market conditions, regulatory constraints, and organizational capability in ways training data cannot capture.

The Irony

The AI-in-fintech market reached $30 billion in 2025 and is projected to hit $97.7 billion by 2034. Every breakthrough enabling that growth was conceived by humans, not algorithms.

AI didn't imagine agentic commerce protocols powering payments across platforms. Humans at OpenAI, Google, and payment networks designed them. AI didn't create regulatory frameworks making stablecoin adoption possible. Humans negotiated with lawmakers and regulators. AI didn't envision embedded finance transforming how consumers interact with financial services. Product leaders recognized the opportunity before usage data existed.

The $200-340 billion in annual value generative AI could add to global banking comes from AI's ability to execute and scale human-conceived strategies, not from inventing those strategies.

McKinsey research suggests that unlocking personalization at scale could create $1.7-3 trillion in global value for banking. But "personalization at scale" means using AI to deliver human-designed experiences to millions of customers simultaneously, not having AI design new customer experiences from scratch.

The winner's formula isn't "best AI wins."

It's "best combination of human strategic insight and AI execution wins." Companies asking "what can AI do?" solve the wrong problem. The real question is "what should we build, and how can AI help us build it faster, cheaper, and at greater scale than humanly possible?"

The first question is about the tool. The second is about strategy. Strategy remains stubbornly human.

What This Means for Financial Services Leaders

Fintech leaders of 2030 won't be those with the best algorithms. They'll be the ones who best combine AI execution with human innovation, understanding that technology's greatest power lies not in replacing human judgment but in scaling it.

AI can process millions of transactions, detect thousands of fraud patterns, and personalize experiences for billions of customers simultaneously. But it cannot handle the decisions that actually matter:

  • The unprecedented crisis not matching historical patterns
  • The nuanced judgment call requiring context no algorithm can capture
  • The ethically complex trade-off demanding values-based reasoning rather than optimization

As banking leaders increasingly recognize, AI cannot replace human judgment: it can support critical thinking, ethical reasoning, and contextual awareness, but it cannot fully replicate them.

The winning model: Humans for strategy, innovation, judgment, and relationship building. AI for execution, optimization, pattern recognition, and scale.

Microsoft's research on Frontier Firms shows organizations embedding AI agents across workflows while maintaining human judgment report returns roughly three times higher than slow adopters. Not because they eliminated people, but because they redesigned work to leverage both human and machine intelligence optimally.

The infrastructure requirements, regulatory complexity, bias risks, and innovation limitations we've explored aren't problems to be solved. They're constraints to be understood and designed around.

Building AI for finance means accepting:

  • You cannot experiment with other people's money
  • Hallucinations are unacceptable rather than quirky
  • Compliance comes before capability
  • The most valuable innovations will continue to come from humans seeing possibilities no training data contains

The question isn't whether AI will transform fintech (it already has). It's whether financial institutions will build the foundations, governance frameworks, and human-AI collaboration models capturing AI's benefits while avoiding its pitfalls.

An industry spending $30 billion on AI still depends entirely on human insight for breakthroughs that matter most. That's not a bug in the technology. It's the fundamental nature of innovation itself.

Getting Started

Understanding what AI can't do matters less than knowing what you should build. Whether you're exploring AI for your fintech platform, redesigning workflows around human-AI collaboration, or need guidance building compliant, scalable infrastructure, having the right partner changes everything.

Contact Softjourn's fintech consulting team to discuss your development strategy, explore collaboration opportunities, or get expert advice on turning AI's limitations into your competitive advantage.