Harrison.ai CEO Says Radiologists Need Evidence, Not AI Hype
By Arunima Rajan
In an interview with Arunima Rajan, Harrison.ai CEO and Co-founder Dr Aengus Tran says the company customised a draft reporting product for Manipal Hospitals after clinicians flagged the time spent writing long radiology reports as a key workflow challenge.
Most AI companies in healthcare started by picking one narrow problem, getting really good at it, and then slowly expanding. You did the opposite. You trained a single model to read over a hundred findings on a chest X-ray right from the start. Walk me through that early decision. Was there a moment when someone on the team said this is too ambitious, and what made you push forward anyway?
There were many moments. And it's not wrong, it's just not what we believed would actually solve the problem. The insight came from thinking about how radiology actually works. A radiologist doesn't look at a chest X-ray and think "I'm only checking for pneumonia today." They're scanning the entire image: lungs, heart, bones, soft tissue, all simultaneously.
If we built an AI that only flagged one or two findings, we wouldn't have built a second pair of eyes. We would have been building a very expensive highlighter or a spell checker for one thing, while the radiologist still had to do all the cognitive work for everything else. That's not a workflow improvement. That's a workflow interruption.
So the question became: what would it actually take to build something comprehensive? And the answer was uncomfortable: it would take an enormous amount of data, annotated at a level of quality that didn't really exist yet, and it would take time. The narrow approach made sense for companies trying to get to a demo quickly. We were trying to build something that would still be relevant in 20 years.
You had around 145 consultant radiologists manually annotating nearly 800,000 chest X-ray studies to train this model. That is an extraordinary amount of human labor to pour into a machine learning product. Most AI founders I talk to are obsessed with reducing human involvement. You seem to have leaned into it. Why?
We're building a system that we want to perform at the benchmark of a consultant radiologist, someone who has spent over a decade training their eyes. If you want to teach a model to see what a consultant radiologist sees, you need to show it what a consultant radiologist sees. We know inter-reader variability is real in radiology. The quality of the label is the ceiling of the model's performance.
A lot of early medical AI was trained on radiology reports, the text that radiologists write after reading a scan. That's a proxy. Reports are written for clinical communication, not for training AI. They're incomplete, inconsistent, and they don't capture the spatial reasoning that makes radiology what it is. We needed radiologists to look at images and mark findings directly. That's a fundamentally different and much more expensive data collection process.
The irony is that this investment in human expertise is what makes the AI trustworthy to humans. When a radiologist asks, "how was this trained?" and we can say "by 145+ board-certified radiologists, triple-labeled, at this scale", that's a conversation that builds confidence.
Here is something I find fascinating about your market. You are selling a product to radiologists that essentially tells them they might be missing things. That is a delicate message to deliver to a highly trained specialist. How did you figure out the right way to position that, and did you get it wrong before you got it right?
The shift happened when we started listening more carefully to what radiologists actually worried about. It was "I'm worried about the volume. I'm worried about the pace. I'm worried about the conditions I'm working in."
India has 1 radiologist for every 100,000 people. The UK is facing a 40% radiologist shortage. Australia is facing backlog issues. This is a global problem. You're not missing things because you're not skilled, you're missing things because you're exhausted and don’t have the capacity to accommodate all the scans.
Every radiologist already knows the value of a colleague double-checking a difficult case. We're making that colleague available on every single read, at 3am, in a rural hospital in Tenkasi with one radiologist on call. That's not a threat to expertise. That's an extension of it.
We are also transparent about what our AI doesn't do well. Radiologists respect intellectual honesty. If you oversell and they find an edge case where the model underperforms, you've lost them. If you're upfront about limitations and show them the evidence, they engage as partners.
You are now in over a thousand sites across multiple countries, each with different regulatory regimes, different clinical workflows, different levels of infrastructure. When you are sitting in a hospital in rural India versus a tertiary care center in Australia, is it even the same product doing the same job, or does the context change what the AI means to the people using it?
It's the same model doing the same job, but the context it operates in is radically different, and we had to build for that from the start. The 125 findings our CXR model looks for on a chest X-ray are the same whether you're in Hong Kong or rural Queensland or Delhi. But the way it integrates, the way it surfaces findings, the way it fits into whatever workflow exists would be different. That being said, we have had to create products for specific markets. A good example would be our draft reporting product that we developed and customised for Manipal Hospitals. We listened to clinicians, and their major concern was the time consumed in writing long reports, so we developed a product to make it easier for Indian radiologists.
The regulatory piece is complex. We have FDA 510(k) clearances in the US, CE Mark in Europe, ARTG in Australia, MHRA registration in the UK, CDSCO in India and clearances across 40+ countries.
When your AI catches something a radiologist missed and that finding changes a patient's outcome, who gets the credit? And more importantly, when something goes wrong, who carries the liability? How far along is that conversation really?
This is one of the most important conversations happening in medicine right now, and I want to be honest: it's not fully resolved. The legal and ethical frameworks are still catching up to the technology.
Our AI is a decision-support tool: it surfaces findings, it flags priorities, it provides a second read. But the radiologist reviews that output, applies their clinical judgment, and signs the report. The radiologist remains the responsible clinician.
What we are focused on is making sure the conversation about liability doesn't slow down adoption in ways that harm patients. The risk of not deploying AI, in missed diagnoses and delayed treatment, is real and quantifiable. We need legal and regulatory frameworks to catch up, and we're actively engaged in those conversations.
A lot of your competitors raised huge rounds, made big promises, and then struggled to get past the pilot phase. You seem to have taken a quieter path to a thousand sites. What did you understand about hospital procurement and clinical adoption that the louder players in the space got wrong?
Many AI radiology companies never scaled beyond pilot programs, not because the technology failed, but because they underestimated the human and institutional complexity of clinical adoption.
Our products are built by clinicians, for clinicians. Involving clinical expertise from the outset ensures a more intuitive user experience and ultimately drives adoption. Hospitals don’t buy technology; they adopt workflows. A pilot that demonstrates strong accuracy in a controlled setting doesn’t automatically translate into real-world clinical use.
Another common misconception is around who the true customer is. Many AI companies focused on selling to administrators and IT teams, the stakeholders who can approve budgets and sign contracts. But actual adoption is determined by clinicians. If radiologists don’t trust the tool, don’t find it useful, or feel it adds friction to their workflow, the product simply won’t be used, regardless of the contract.
Finally, our award-winning Viewer UX/UI is designed to present clinical findings clearly and intuitively. With minimal clicks and a streamlined interface, it accelerates workflows, addressing a key pain point seen in many other solutions on the market.
When you look at the broader imaging landscape, where do you see the same kind of gap between what AI can do and what clinicians actually need, and how do you decide when the technology is ready versus when you are just chasing a bigger market?
CT Brain was the natural next step. Non-contrast brain CT is one of the highest-stakes reads in emergency medicine: strokes, bleeds and trauma. The time-sensitivity is extreme. A patient presenting with stroke symptoms needs a read in minutes, not hours. And yet the same shortage problem that affects chest X-ray affects neuroradiology. We built a model covering 130 findings on non-contrast CT brain scans.
Our CT Chest product is our latest solution, designed to work across both contrast and non-contrast scans. It can detect over 200 features, including those relevant for lung cancer screening. This is where I believe the impact can be especially significant. Lung cancer remains the leading cause of cancer-related deaths globally, and the evidence supporting low-dose CT screening is strong.
If you zoom out ten years from now, what does a radiology department look like in a world where AI like yours is standard? Are we talking about fewer radiologists doing different work, or more scans being read with the same number of people, or something that none of us are imagining yet?
I would think more radiologists doing more important work, reading more scans than we can currently imagine and reaching patients who today have no access to specialist diagnosis at all.
The second thing I'd say is that the nature of the work changes. Right now, a significant portion of a radiologist's day is spent on routine reads, normal studies, common findings and straightforward cases. What AI doesn't handle well is the genuinely ambiguous case, the rare presentation, the clinical context that requires a conversation with the referring physician, the judgment call that requires experience and intuition. If AI absorbs the routine, radiologists get to spend more of their time on the cases that actually need them.
Got a story that Healthcare Executive should dig into? Shoot it over to arunima.rajan@hosmac.com—no PR fluff, just solid leads.