Amit Das on How Think360.ai is Making AI-Driven Lending More Trustworthy, Transparent, and Regulation-Ready
StartupTalky presents Recap'25, a series of exclusive interviews where we connect with founders and industry leaders to reflect on their journey in 2025 and discuss their vision for the future.
In this edition of Recap’25, StartupTalky speaks with Amit Das, Founder and CEO of Think360.ai, a leading AI-driven decision intelligence company powering underwriting, onboarding, and risk intelligence for BFSI institutions. As financial services become increasingly data-driven and tightly regulated, Think360.ai is building the infrastructure that helps lenders move beyond score-based models toward behaviour-aware, consent-led, and explainable AI decision systems.
In this conversation, Amit Das reflects on how 2025 marked a turning point in AI adoption for financial services, with the maturing Account Aggregator ecosystem and production-grade Generative AI reshaping underwriting and compliance. He discusses the shift from model accuracy to decision defensibility, the importance of trust-centric metrics like calibration, fairness, and stability, and how Think360.ai is preparing institutions for the DPDP era. He also shares the company’s global expansion strategy and why, in sensitive domains like finance, governance is not just compliance—it’s a long-term competitive advantage.
StartupTalky: Think360.ai powers critical functions like underwriting and risk intelligence. What was the most significant AI model or data integration milestone achieved in 2025 that directly enhanced decision-making for your BFSI clients?
Amit Das: In 2025, the inflection point was not a single model or dataset, but a structural shift in how decisions are engineered.
Two macro forces converged. First, the Account Aggregator (AA) ecosystem reached operational maturity, with materially higher consent success rates and richer cash-flow signals. Second, Generative AI moved from experimentation to production relevance, particularly in interpretability, document intelligence, and decision narratives.
Our most meaningful milestone was moving from model-centric accuracy to decision intelligence at scale. In practical terms, this meant designing underwriting systems that do three things simultaneously:
- optimise risk prediction,
- maintain regulatory defensibility,
- and remain stable under data growth and policy change.
We now evaluate bureau data, repayment history, AA-based cash-flow signals, income-expense volatility, existing obligations, and behavioural indicators within a single underwriting graph. The key advancement was not adding variables, but normalising and sequencing them with explicit data lineage, so every input can be traced to its source, consent artefact, and timestamp.
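As a minimal sketch of the lineage idea Amit describes (the class and field names here are illustrative assumptions, not Think360.ai's actual schema), each underwriting input can carry its source, consent artefact, and timestamp so it remains traceable after the decision is made:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Signal:
    """One underwriting input with full data lineage attached."""
    name: str          # e.g. "aa_monthly_inflow" (hypothetical signal name)
    value: float
    source: str        # e.g. "account_aggregator", "bureau"
    consent_id: str    # the consent artefact this access was made under
    fetched_at: datetime

@dataclass
class UnderwritingRecord:
    applicant_id: str
    signals: list = field(default_factory=list)

    def add(self, signal: Signal) -> None:
        self.signals.append(signal)

    def lineage(self, name: str) -> dict:
        """Trace a signal back to its source, consent artefact, and timestamp."""
        s = next(x for x in self.signals if x.name == name)
        return {"source": s.source, "consent_id": s.consent_id,
                "fetched_at": s.fetched_at.isoformat()}

record = UnderwritingRecord("APP-001")
record.add(Signal("aa_monthly_inflow", 84000.0, "account_aggregator",
                  "CONSENT-7F2A", datetime(2025, 6, 1, tzinfo=timezone.utc)))
print(record.lineage("aa_monthly_inflow")["source"])  # account_aggregator
```

The point of the structure is that "why was this input used?" becomes a lookup, not an investigation.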
From a risk lens, this reduced false confidence. From a business lens, it improved approval precision. For BFSI clients, the outcome has been a measurable shift away from score-only decisioning toward behaviour-aware underwriting, which aligns far more closely with realised repayment patterns while meeting audit and supervisory expectations.
StartupTalky: Beyond revenue and client count, what are the two or three non-obvious KPIs you track to measure the true predictive accuracy and ethical compliance of your AI models?
Amit Das: We optimise for trust in production, not headline accuracy in sandbox environments.
Three indicators matter most to us:
- Decision Stability: We measure how sensitive outcomes are to small, non-material input changes. High volatility is an early indicator of brittle logic and governance risk. In regulated lending, unstable decisions are as dangerous as inaccurate ones.
- Calibration and Drift: Calibration answers a simple but powerful question: do borrowers assigned to a risk band behave like that band over time? Drift monitoring then tracks whether those relationships shift due to macro conditions, portfolio mix, or behavioural change. Together, they prevent silent risk accumulation.
- Fairness and Proxy Risk: We continuously test for approval-rate and performance deltas across cohorts, specifically looking for proxy features that appear neutral but behave like stand-ins for sensitive attributes. The objective is not just fairness optics, but long-term portfolio resilience.
These KPIs act as early-warning systems, allowing us to intervene before issues surface as regulatory findings, portfolio stress, or customer harm.
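The calibration check in particular is easy to make concrete. This sketch (our own illustration, assuming records of the form `(risk_band, predicted_PD, defaulted)`; the tolerance value is arbitrary) compares the mean predicted default probability in each band to the observed default rate and flags bands where the gap exceeds a tolerance:

```python
from statistics import mean

def calibration_by_band(records, tolerance=0.02):
    """Flag risk bands whose predicted PD diverges from observed defaults.

    records: iterable of (band, predicted_pd, defaulted) tuples.
    Returns {band: predicted - observed} for bands beyond the tolerance.
    """
    by_band = {}
    for band, pd_hat, defaulted in records:
        by_band.setdefault(band, []).append((pd_hat, defaulted))
    alerts = {}
    for band, rows in by_band.items():
        predicted = mean(p for p, _ in rows)
        observed = mean(1.0 if d else 0.0 for _, d in rows)
        if abs(predicted - observed) > tolerance:
            alerts[band] = round(predicted - observed, 4)
    return alerts

# Band A behaves like its band; band B defaults far more than predicted.
data = [("A", 0.02, False)] * 97 + [("A", 0.02, True)] * 3 \
     + [("B", 0.05, False)] * 88 + [("B", 0.05, True)] * 12
print(calibration_by_band(data))  # {'B': -0.07}
```

Running the same comparison on rolling windows, rather than once, is what turns this from a calibration snapshot into the drift monitor described above.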
StartupTalky: Your products, like Algo360 and Kwik.ID, are central to digital lending and onboarding. How did Think360.ai navigate the regulatory and data privacy challenges associated with the Account Aggregator framework in 2025?
Amit Das: The Account Aggregator architecture fundamentally redefines control: the customer, not the institution, becomes the gatekeeper.
Our design principle has been to treat consent as executable infrastructure, not static compliance. Every workflow is built around explicit purpose limitation, time-bound access, and end-to-end traceability. Even as other operators have cut corners on trust, citing UX friction as the reason, we have held the line on doing right by the end customer. We believe that in the long term, trust will be the biggest moat for companies like us.
On the onboarding side, products like Kwik.ID generate audit-ready verification trails, including time-stamped artefacts and, where required, geo-backed proof. This allows institutions to demonstrate process integrity rather than rely on narrative explanations during audits.
For AA-enabled cash-flow underwriting, our systems preserve a full access trail: what data was accessed, under which consent, for what purpose, and how it influenced the final decision. This enables lenders to adopt cash-flow intelligence without introducing unquantified regulatory or reputational risk.
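The access trail Amit describes can be sketched as an append-only ledger that enforces purpose limitation and time-bound consent before any access is logged. This is an illustrative assumption of ours, not Think360.ai's actual API; the consent fields and purpose codes are hypothetical:

```python
from datetime import datetime, timezone

class ConsentLedger:
    """Append-only trail: what data was accessed, under which consent,
    for what purpose, and when (illustrative sketch only)."""
    def __init__(self):
        self._entries = []

    def record_access(self, consent, data_item, purpose, at=None):
        # Enforce purpose limitation and time-bound access before logging.
        if purpose != consent["purpose"]:
            raise PermissionError(f"purpose '{purpose}' not covered by consent")
        at = at or datetime.now(timezone.utc)
        if not (consent["valid_from"] <= at <= consent["valid_until"]):
            raise PermissionError("access outside consented window")
        self._entries.append({"consent_id": consent["id"],
                              "data_item": data_item,
                              "purpose": purpose, "at": at.isoformat()})
        return self._entries[-1]

consent = {"id": "C-123", "purpose": "cash_flow_underwriting",
           "valid_from": datetime(2025, 1, 1, tzinfo=timezone.utc),
           "valid_until": datetime(2025, 12, 31, tzinfo=timezone.utc)}
ledger = ConsentLedger()
entry = ledger.record_access(consent, "bank_statement_txns",
                             "cash_flow_underwriting",
                             at=datetime(2025, 6, 15, tzinfo=timezone.utc))
print(entry["consent_id"])  # C-123
```

Because out-of-purpose or out-of-window access raises before anything is logged, the ledger can only ever contain accesses that were valid under their consent.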
StartupTalky: The study linking gaming borrowers to higher credit risk was notable. How does Think360.ai balance the use of alternative data sources for financial inclusion with the need to avoid algorithmic bias and ensure fair lending practices?
Amit Das: One of our studies showed that nearly 20% of borrowers engage in real-money gaming. The insight itself was not the point; how it is used is what matters. How do you work with such customer segments where a family shares a device that is being used for all financial, personal, and leisure activities?
At Think360.ai, alternative data is treated as a contextual risk lens, not a deterministic label. Every such signal must clear three gates before entering production:
- demonstrable incremental predictive value,
- outcome validation over time,
- and fairness testing to ensure it does not act as a proxy for sensitive attributes.
Behavioural signals add nuance, not verdicts. If we cannot explain to a regulator, a risk committee, or a borrower why a signal mattered and how much it mattered, it does not belong in a live system.
Inclusion cannot be built on opaque logic. It must be built on explainable, monitorable intelligence that expands access without embedding hidden bias.
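The fairness-testing gate above can be made monitorable with a simple cohort comparison. As a sketch under our own assumptions (cohort labels and the 5% threshold are hypothetical), this flags cohorts whose approval rate deviates materially from the overall rate, which is one signal that a seemingly neutral feature may be acting as a proxy:

```python
def approval_rate_deltas(decisions, max_delta=0.05):
    """Flag cohorts whose approval rate deviates from the overall rate.

    decisions: iterable of (cohort, approved) pairs.
    Returns {cohort: cohort_rate - overall_rate} for deviations > max_delta.
    """
    by_cohort, total = {}, []
    for cohort, approved in decisions:
        by_cohort.setdefault(cohort, []).append(approved)
        total.append(approved)
    overall = sum(total) / len(total)
    return {c: round(sum(v) / len(v) - overall, 3)
            for c, v in by_cohort.items()
            if abs(sum(v) / len(v) - overall) > max_delta}

decisions = [("X", True)] * 70 + [("X", False)] * 30 \
          + [("Y", True)] * 55 + [("Y", False)] * 45
print(approval_rate_deltas(decisions))  # {'X': 0.075, 'Y': -0.075}
```

A flagged delta is not proof of bias on its own, but it is exactly the kind of early-warning signal that prompts the deeper proxy analysis described earlier.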
StartupTalky: You are expanding globally beyond India. What is the strategic focus of this international expansion, and how do you adapt India-centric products like Algo360 to new regulatory environments?
Amit Das: We are considering expansion into markets that share structural characteristics with India: thin-file populations, fragmented data systems, and rapidly evolving consent-led regulation. We focus on the data sources (around which Algo360 has built connectors) as well as the interpretation and inference layers. Algo360 was built with its own SLM design long before these conversations became mainstream.
Algo360 is designed as a configurable decision framework, not an “India model.” Core principles remain invariant: consent-first data usage, explainability, and auditability. Execution is localised through:
- jurisdiction-specific data connectors,
- locally calibrated risk thresholds,
- institution-defined policy rules,
- and embedded compliance logic.
This allows lenders to meet local regulatory expectations without rebuilding core intelligence, while preserving consistent governance across geographies.
StartupTalky: Looking ahead to 2026, what is Think360.ai’s biggest product or market bet?
Amit Das: Our largest bet is that DPDP readiness will become a competitive differentiator, not a compliance tax. Trust is the moat that every brand and organisation will aspire to.
As DPDP moves from principle to enforcement, institutions will be evaluated on whether every data-driven decision is backed by valid consent, clear purpose, and defensible evidence.
We are scaling this through ConsenPro, our DPDP-native consent and data-rights platform. It enables transparent notices, explicit consent capture, easy withdrawal, and secure, audit-ready consent logs that show exactly what happened downstream.
In parallel, Algo360 is being strengthened to ensure end-to-end decision defensibility, linking outcomes back to precise inputs, consent artefacts, and purpose codes, in language that non-technical stakeholders can understand.
StartupTalky: Five years from now, what do you hope will be the lasting legacy of Think360.ai on the global fintech ecosystem?
Amit Das: When you think of AI, you should Think360.AI. Think360.ai’s work is grounded in the belief that scalable AI is thoughtful AI. Work such as AI-driven credit can scale responsibly, expanding access without compromising transparency, fairness, or trust, and without burning a hole through your pocket while delivering no value.

The broader shift I hope we contribute to is moving the industry away from black-box automation toward accountable decision intelligence, where institutions can innovate faster precisely because decisions remain reviewable, challengeable, and trusted. We want to expand how much (data) is being looked at, how (quantitatively, algorithmically, architecturally) it is being looked at, and whether we can believe in it (trust, lack of bias). We also believe that current and future Thinkers will go on to drive this philosophy across industries.
StartupTalky: What is the single most important, hard-won lesson you would share with a leader scaling a B2B AI company handling sensitive financial data?
Amit Das: Treat trust and governance as a product capability and balance-sheet asset, not a compliance afterthought.
Shortcuts in financial AI rarely fail immediately. When they do fail, they fail publicly and expensively: through regulatory action, audit findings, customer harm, or reputational loss.
The durable path is disciplined foundations: data minimisation, lineage, access control, and explainability. When these are built early, trust compounds with regulators and clients, and model evolution becomes easier over time.
Speed can be recovered. Credibility, once lost, almost never is.