MedQA is a 1.7-billion-parameter clinical LLM fine-tuned on AMD's MI300X GPUs using ROCm 6.1. It matches or exceeds NVIDIA A100 performance on medical QA benchmarks, delivering 92% accuracy on MedMCQA multiple-choice diagnosis questions while cutting inference latency to 18 ms per token.
Overview
The MedQA project challenges the assumption that medical AI work requires NVIDIA GPUs. It builds on the HuggingFace ecosystem, including Transformers, PEFT, TRL, and Accelerate, all of which run on ROCm without modification. The entire training pipeline runs on an AMD Instinct MI300X with no CUDA dependencies.
What it does
MedQA is a LoRA fine-tuned clinical question-answering model that takes a multiple-choice medical question and returns both the correct answer letter and a clinical explanation of the reasoning. It is built on the Qwen3-1.7B base model (1.7 billion parameters), loaded with trust_remote_code=True.
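The plumbing around such a model is straightforward: render the question and options into a prompt, then pull the answer letter out of the completion. This is a minimal sketch; the prompt template and parsing rule are illustrative assumptions, not taken from the project.

```python
# Hypothetical inference-side helpers for a multiple-choice medical QA model.
# The prompt wording and letter-extraction heuristic are assumptions.
import re
from typing import Optional


def format_question(stem: str, options: dict) -> str:
    """Render a question stem and lettered options as a single prompt string."""
    lines = [stem]
    for letter in sorted(options):
        lines.append(f"{letter}. {options[letter]}")
    lines.append("Answer with the correct letter, then explain your reasoning.")
    return "\n".join(lines)


def parse_answer(completion: str) -> Optional[str]:
    """Pull the first standalone answer letter (A-D) out of the completion."""
    match = re.search(r"\b([A-D])\b", completion)
    return match.group(1) if match else None
```

For example, `parse_answer("Answer: C. Because of renal compensation.")` returns `"C"`, which can then be scored against the gold label.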
Tradeoffs
The project uses LoRA (Low-Rank Adaptation) via the PEFT library, which injects small trainable rank-decomposition matrices into the attention layers while leaving the base weights frozen. This keeps memory usage low and training fast. The model is trained with a per-device batch size of 4 and an effective batch size of 16 via gradient accumulation, using a cosine LR schedule with warmup.
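The setup above maps onto a short PEFT/Transformers configuration. In this sketch, the rank, alpha, target modules, and warmup ratio are illustrative assumptions; only the batch size, gradient accumulation, and cosine schedule come from the text.

```python
# Sketch of the LoRA + training configuration described above.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                                 # rank of the decomposition matrices (assumption)
    lora_alpha=32,                        # LoRA scaling factor (assumption)
    target_modules=["q_proj", "v_proj"],  # inject adapters into attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="medqa-lora",
    per_device_train_batch_size=4,   # batch size of 4
    gradient_accumulation_steps=4,   # 4 x 4 = effective batch size of 16
    lr_scheduler_type="cosine",      # cosine LR schedule
    warmup_ratio=0.03,               # warmup fraction (assumption)
    bf16=True,                       # bf16 runs natively on the MI300X
)
```

Because only the adapter matrices receive gradients, optimizer state is allocated for millions rather than billions of parameters, which is where the memory and speed savings come from.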
MedQA achieves 92% accuracy on the MedMCQA dataset, with a training time of approximately 5 minutes on the MI300X. The model has ~2.2 million trainable parameters, roughly 0.13% of the total.
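The accuracy figure reduces to a simple exact-match computation over predicted and gold answer letters. A minimal sketch, with hypothetical function and variable names:

```python
# Exact-match accuracy over (predicted letter, gold letter) pairs, mirroring
# how a multiple-choice benchmark score like the MedMCQA figure is computed.
def accuracy(predictions, golds):
    """Fraction of positions where the predicted letter equals the gold letter."""
    correct = sum(p == g for p, g in zip(predictions, golds))
    return correct / len(golds)


# e.g. 3 correct out of 4 predictions:
# accuracy(["A", "C", "B", "D"], ["A", "C", "B", "A"]) -> 0.75
```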
The project also highlights the advantages of using AMD ROCm, including the ability to train without CUDA dependencies and the availability of 192 GB HBM3 memory on the MI300X. This removes the need for 4-bit quantization and allows for cleaner training with no quantization artifacts.
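Back-of-envelope arithmetic shows why quantization is unnecessary here: a 1.7B-parameter model in bf16 occupies only a few gigabytes of the MI300X's 192 GB.

```python
# Rough memory footprint of the model weights in bf16 (2 bytes per parameter).
# Optimizer state and activations add overhead, but with frozen base weights
# and ~2.2M LoRA parameters, the total stays far below 192 GB of HBM3.
params = 1.7e9
bytes_per_param_bf16 = 2
weights_gb = params * bytes_per_param_bf16 / 1e9
hbm_gb = 192
print(f"weights: {weights_gb:.1f} GB of {hbm_gb} GB HBM3")  # ~3.4 GB
```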
When to use it
MedQA can be used for medical question answering, and its ability to provide explanations for its answers makes it clinically useful. The project demonstrates that building a capable, explainable medical AI on open-source AMD hardware is possible and straightforward.
The next steps for the project include scaling and hardening the pipeline, including training on a larger dataset, adding confidence scoring, and integrating RAG (Retrieval-Augmented Generation) for real-time medical literature retrieval.
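The planned RAG step can be sketched at a high level: retrieve the most relevant passage from a corpus of medical literature and prepend it to the prompt. The word-overlap scoring below is a toy stand-in for the dense-embedding retrieval a real system would use; all names here are hypothetical.

```python
# Toy sketch of retrieval-augmented prompting: score passages by word overlap
# with the question and prepend the best match. A production pipeline would
# use embedding similarity over a real literature index instead.
def retrieve(question: str, passages: list) -> str:
    """Return the passage sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))


def augment_prompt(question: str, passages: list) -> str:
    """Prepend the retrieved passage as context for the model."""
    context = retrieve(question, passages)
    return f"Context: {context}\n\nQuestion: {question}"
```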
In conclusion, MedQA shows that the HuggingFace ecosystem's ROCm compatibility is genuinely good, and the MI300X's memory headroom removes an entire category of engineering problems. LoRA makes fine-tuning a 1.7B model a 5-minute job, making it an attractive option for medical AI applications.
Practical takeaway: MedQA demonstrates the feasibility of building clinical AI models on AMD ROCm, offering a promising alternative to NVIDIA-dominated medical AI training. By leveraging the HuggingFace ecosystem and LoRA fine-tuning, developers can create capable and explainable medical AI models with reduced training times and costs.