Facial recognition software could jeopardize patient confidentiality when used to analyze MRI images, according to a new study conducted by the Mayo Clinic.

Mayo Clinic researchers used head MRI scans from 84 volunteers. The volunteers were photographed from five different angles, and the researchers also reconstructed an image of each face from the MRI data, including the surface outline formed by skin, fat, and the skull's bone marrow while excluding bone and hair.

The researchers then used Microsoft Azure facial-recognition software to see whether it could identify the faces reconstructed from the MRI scans. The software correctly matched 70 of the 84 MRI-derived images to a photo of the corresponding volunteer, a result the researchers found unsurprising but concerning. Microsoft did not comment when asked about the findings.
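The matching step can be illustrated abstractly. The following toy sketch is not the study's pipeline and does not use the Azure service; it assumes the common approach in which each face is reduced to a numeric embedding vector, and a reconstruction is identified by finding the reference photo whose embedding is most similar. The embeddings here are synthetic (random vectors plus noise), purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, dim = 84, 128  # 84 volunteers, hypothetical embedding size

# Hypothetical embeddings for the reference photos (one per volunteer),
# normalized to unit length so dot products are cosine similarities.
photo_emb = rng.normal(size=(n_subjects, dim))
photo_emb /= np.linalg.norm(photo_emb, axis=1, keepdims=True)

# Simulate MRI-derived reconstructions as noisy versions of the photo
# embeddings (a stand-in for "same face, different imaging modality").
mri_emb = photo_emb + 0.05 * rng.normal(size=(n_subjects, dim))
mri_emb /= np.linalg.norm(mri_emb, axis=1, keepdims=True)

# Cosine similarity between every reconstruction and every photo,
# then match each reconstruction to its most similar photo.
similarity = mri_emb @ photo_emb.T
matches = similarity.argmax(axis=1)

n_correct = int((matches == np.arange(n_subjects)).sum())
match_rate = n_correct / n_subjects
print(f"correct matches: {n_correct}/{n_subjects}")
```

Because the synthetic noise is small, nearly every reconstruction matches its own photo; in the real study, the re-identification rate depended on how much facial detail the MRI surface preserved.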

Clinical research studies collect extensive data on their participants. Private information such as family medical history, illnesses, and genetic data is put at heightened risk of exposure when artificial intelligence can accurately infer a participant's identity from an MRI image.

Although the capabilities of facial-recognition software are alarming, the researchers expect that making correct identity matches would be harder against a pool of thousands of random photos. However, as clinical studies continue to accumulate large volumes of personal medical data, privacy risks will escalate in tandem.