NASA's AI Doctor May Be The Key To Surviving A Mars Mission
In NASA's current missions, medical risks are manageable. Crews can be evacuated in an emergency, communicate with Earth in near real time, and send samples home for analysis. Long-duration future missions, such as a trip to Mars, will take away that safety net: there are no evacuations, no resupply, and no easy way to communicate with Earth.
NASA is researching the idea of using artificial intelligence (AI) to create a "doctor" for these missions. Part of this initiative is a Crew Medical Officer (CMO) Digital Assistant designed to guide decisions, suggest evaluations, and recommend treatments. Crew members may also wear devices that monitor their physical health, feeding that data to the CMO Digital Assistant.
This raises the question of whether AI is the right call on inherently dangerous Mars missions. NASA is focused on keeping humans involved at every stage, and early tests have been promising. However, AI in health care has been met with concerns, including the possibility of inherent biases and data security risks, as well as the lack of clear accountability for AI decisions.
NASA's AI health care project
AI is certainly not new in health care, and there's even an AI assistant that can tell if you're sick by looking at your tongue. Even with all the emerging technology, NASA emphasizes the need for humans to stay involved. The project works in tandem with the Space Medicine Operations Division, with the AI providing recommendations to the human crew medical officer and the ground flight surgeons. It would ask about a patient's symptoms, account for any relevant medical history, and guide crew members through any physical exams needed for further diagnosis. It could then offer medical suggestions.
NASA has laid out its Trustworthy AI Principles to guide the project: the AI must not be biased, must be scientifically accurate, must protect patient privacy, and must not harm humans. To meet these objectives, NASA built its own large language model (LLM) from open-source models, running on Google Cloud's Vertex AI.
The AI showed promise in early patient tests, achieving 74% accuracy in flank pain evaluations, 80% in ear pain evaluations, and 88% in ankle injury evaluations. Further tests are planned, covering additional medical scenarios as well as medical imaging.
Using AI in medical applications may come with risks
AI has worked its way into nearly every facet of medicine, and people still like to use ChatGPT for health care despite warnings. NASA is up front about the challenges this project faces. It is expensive and time-consuming, and there are trust issues with using AI in general, let alone for potentially life-or-death medical emergencies.
The Journal of Medical Internet Research (JMIR) published an article in November 2024 titled "Benefits and Risks of AI in Health Care." While the study pointed out the promise AI holds in health care, it also highlighted the risks of biases ingrained during the AI's training and of patient data being compromised, along with the problem of decision-making transparency. On a NASA Mars mission, if the CMO Digital Assistant recommends something that harms or kills a crew member, who is to blame? The AI? The human who followed the advice? The team that developed the AI?
Medicine has a historical problem with clinical trials, research, and data focusing on patients who are white and male, putting the health of women and minorities at risk. That raises the concern that this same skewed data and research could be integrated into AI, leading to an incorrect diagnosis or inadequate treatment. When you are on a mission to Mars with no help beyond what you brought with you, that may be an extra risk you don't want to take.