Logan Skrzypek ‘27, Student Life Editor
On Aug. 12, 2024, researchers at the Yale School of Medicine in New Haven, Connecticut announced that an artificial intelligence (AI) model could detect Marfan syndrome simply by analyzing a facial photograph. According to the Mayo Clinic, “Marfan syndrome is an inherited disorder that affects connective tissue — the fibers that support and anchor organs and other structures in the body. It affects the heart, eyes, blood vessels and bones.” In the pilot study, 672 facial photographs of people with and without Marfan syndrome were used to train and test the AI model, which distinguished between Marfan and non-Marfan faces with 98.5 percent accuracy.
But could doctors diagnose an individual with Marfan syndrome without the use of AI technology? Freshman Meredith Gateman said, “AI can pick up data much faster than a real doctor can, and it can examine a person’s face from a much smarter perspective. It’s definitely beneficial, and I wouldn’t see doctors do any better than what AI is capable of doing right now.” The data an AI model uses to generate its responses is gathered from a wide range of online sources; in effect, it searches those resources far more efficiently than a person could. Some analysts have even predicted that, within the next few years, as much as 90 percent of online content could be AI-generated. Although most doctors receive their training from professors and medical schools, newer doctors may increasingly rely on AI because of its efficiency in accurately diagnosing diseases such as Marfan syndrome.
A similar study was published on Sept. 4 by scientists at Harvard Medical School in Boston, Massachusetts. A ChatGPT-like AI model proved capable of performing a range of diagnostic tasks to determine whether an individual showed signs of multiple forms of cancer. The model, which reads digital slides of tumor tissue, detects cancer cells and predicts a tumor’s molecular profile from cellular features in the image with greater accuracy than current AI systems. It can also predict patient survival across multiple cancer types and pinpoint features in the tissue surrounding a tumor that relate to a patient’s response to standard treatments, including surgery, chemotherapy, radiation, and immunotherapy, by identifying tell-tale patterns on images linked to tumor aggressiveness.

How could this be revolutionary for treating patients with cancer? Sophomore Grace Watson stated, “I think this could be revolutionary to finding more information on how to treat patients with cancer because AI knows everything. If it knows all the information from a bunch of resources across the web, it can obviously detect diseases for all types of diseases.” Because AI can draw on and reconsider information from so many online sources, it has become one of the most widely used “search engines” since Google launched in 1998. Some predict AI will reshape medical fields around the world. From pictures, factual data, and other online resources, AI can help diagnose patients with serious diseases, including cancer, and may point the way toward life-changing treatments for many physical ailments. But the real question now is: can we trust our doctors to use and learn from AI as it continues to expand into the medical field?
