The financial industry is rapidly removing humans from the payment chain while still operating under legal and compliance systems built for human decision-making. Slash Financial, a fintech platform focused on vertical banking, stablecoin payments, corporate cards, and treasury, recently announced an AI agent called “Twin” to automate financial workflows.
The Human Is Leaving the Payment Chain
Along with the April release of Twin, Slash announced $100 million in Series C funding, some of which will go to powering the new AI platform. Twin is not simply a glorified data processor, but a proactive AI agent within the growing space of agentic commerce. It can autonomously initiate payments directly from business accounts without direct human involvement.
Slash, a privately held startup now valued at $1.4 billion, has grown by leaps and bounds since its 2021 launch as a fintech serving sneaker resellers. However, with the introduction of Twin, the question of liability arises, as it does for several other players in the neobank space employing autonomous AI agents. Who’s ultimately responsible when an AI agent moves money, and how are permissions and fraud controlled?
Agentic Commerce Isn’t Coming. It’s Here.
Agentic commerce is seen by those in the space as inevitable. The narrative, shaped by those who stand to benefit, goes that it’s best to adopt this new technology now, or be left playing catch-up for years.
Espousing the inevitability narrative, Slash’s website proclaims:
“Businesses shouldn’t be stuck debating whether AI agents will take over financial workflows—they will. The real risk is waiting too long to adapt. The companies that move early will shape how this shift plays out, while the rest will be left trying to catch up after it’s already underway.”
The Push to Adopt Before Anyone Understands It
They are not alone in this sentiment. Many retailers and digital media companies are already using AI agents in various capacities. Amazon has used agents to sell items not available on Amazon, while vigorously fighting third-party AI agents shopping on Amazon. The fight isn’t just over dollars; it’s over data.
The movement of AI agents is far more opaque to retailers than that of flesh-and-blood customers, leaving retailers with less data to target their business operations or to sell to third parties.
The Real Fight Is Over Data
Since February, eBay has outright banned third-party AI ‘buy for me’ agents from its site. Like Amazon, it has invested heavily in its own proprietary AI agents and wants to keep customers siloed in its ecosystem rather than bouncing around the internet via an AI surrogate.
How the battle between AI agents, like Slash’s Twin, and proprietary agents, like Amazon’s Rufus, will play out is unclear. However, many see online retailers eventually being forced to come to the table with the groundswell of third-party AI agents.
Retailers Want AI. Just Not Yours.
The consulting firm McKinsey put out a report on agentic commerce in October 2025. Lareina Yee, director of technology research at the McKinsey Global Institute, had this to say:
“This is not a wait-and-see moment. Before long, nearly all retailers will have to grapple with the fact that a significant percentage of their customers will not be human users but rather AI agents. The challenge will be to get out in front of it now, before your rivals do. The companies that move first, even in small ways, will be the ones that help shape the future.”
The Payment System Was Built for Humans
In digital commerce, the industry standard for processing transactions is the four-party model: the customer, the seller, the issuing bank, and the acquiring bank. Payment networks, such as Visa or Mastercard, facilitate the process.
But now, with the introduction of AI agents, there is a fifth member of the party. Given the speed at which these new autonomous actors have been injected into neobanks and e-commerce, the laws are foggy at best regarding liability.
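To make the structural shift concrete, here is a minimal, purely illustrative sketch in Python. All names and fields are hypothetical; nothing below reflects Slash’s systems or any card network’s actual data model. It simply shows how a fifth actor changes what a transaction record has to capture:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model: roles and fields are invented for illustration only.

@dataclass
class Party:
    name: str
    role: str  # "customer", "merchant", "issuer", "acquirer", or "agent"

@dataclass
class Transaction:
    amount_cents: int
    customer: Party
    merchant: Party
    issuer: Party
    acquirer: Party
    # The new fifth party, absent from the classic four-party model.
    initiating_agent: Optional[Party] = None

    def initiated_by_human(self) -> bool:
        """True when no AI agent sits in the payment chain."""
        return self.initiating_agent is None

# A classic four-party transaction: human customer initiates.
t1 = Transaction(
    amount_cents=5_000,
    customer=Party("Acme LLC", "customer"),
    merchant=Party("Widget Co", "merchant"),
    issuer=Party("Issuing Bank", "issuer"),
    acquirer=Party("Acquiring Bank", "acquirer"),
)

# The same transaction, but initiated autonomously by an AI agent.
t2 = Transaction(
    amount_cents=5_000,
    customer=Party("Acme LLC", "customer"),
    merchant=Party("Widget Co", "merchant"),
    issuer=Party("Issuing Bank", "issuer"),
    acquirer=Party("Acquiring Bank", "acquirer"),
    initiating_agent=Party("Autonomous agent", "agent"),
)

print(t1.initiated_by_human())  # True
print(t2.initiated_by_human())  # False
```

The point of the sketch is that every downstream question in this article, from permissions to fraud controls to liability, hinges on that one optional field: who, or what, actually initiated the payment, and whether that provenance is recorded at all.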
The Law Hasn’t Caught Up
While companies like Slash brag about the efficiency gains and scalability AI agents can offer, less is said about compliance, beyond noting that all transactions are audited. That’s all well and good, but Slash is not a bank and is not subject to the same fiduciary standards as one. If things go sideways, best of luck.
After the collapse of the fintech Synapse in 2024, there was a push to bring clarity and regulation to the burgeoning fintech industry. Those efforts have foundered over the past year and a half, even as fintechs have added byzantine new AI-powered layers to their offerings.
Synapse Was the Warning Shot
Given Slash Financial’s explosive growth in the past few years, they’re clearly doing something that customers like. But what happens when Twin, or a similar AI agent, goes rogue or hallucinates? A legal gray zone. That’s how the McKinsey report cited above refers to accountability within agentic commerce.
“When an AI agent makes a poor decision, determining accountability is complex. Who is to blame for that faulty transaction? The platform that developed the model? The brand that deployed the agent? The user who approved it? Currently, there is no global consensus on responsibility.”
What Happens When the AI Gets It Wrong?
There is also the risk of a snowball effect, in which an error made by one autonomous agent influences the decision-making of interconnected agents, thus creating risks that can grow exponentially. Certainly, no fintech or neobank wants that, but when everyone is jockeying for position in a space governed only by an outdated legal framework, the race to the top of the mountain could cause an unintended landslide.
Just look at the Synapse collapse. It’s been almost two years, and tens of millions of dollars in regular people’s savings are still inexplicably lost. That was before autonomous AI agents were opening bank accounts and issuing corporate cards. As fintech races toward autonomous finance, the biggest unanswered question may not be what AI agents can do, but who is left holding the bag when they inevitably make a costly mistake.
Author: Tim Tolka, Senior Reporter
The editorial team at #MRKT3.0 has taken all precautions to ensure that no persons or organizations have been adversely affected or offered any sort of financial advice in this article.
See Also:
Wall Street Is Using AI as Cover for Mass Layoffs
Why AI Super PACs Are Avoiding the Word “AI” at All Costs
“The AI Layoff Trap”: Congratulations, You’ve Been Automated.
