Researchers posed eight different questions that a good Samaritan might ask during a cardiac emergency. Nine of the 32 responses recommended calling emergency services, and only 12% provided verbal CPR instructions; some answers directed the user to written instructions rather than relaying them aloud.
In striking contrast, the study found that ChatGPT (OpenAI's GPT-3.5) provided relevant CPR information for every query posed, outperforming the voice assistants at a task squarely in their domain.
The research highlighted that, across all responses from the voice assistants tested, only 28% suggested the primary and most critical step: calling emergency services for help. This omission is especially alarming because survival after cardiac arrest depends on a prompt and accurate emergency response, which starts with calling 911 immediately.
While some might argue that a bystander's first instinct during such an emergency may not be to "Ask Siri," the reality is that, in a state of panic, many people might turn to easily accessible devices like smart speakers or phones for guidance. The importance of voice assistants providing timely and accurate instructions cannot be overstated.
Despite ChatGPT's commendable performance, however, researchers found its responses inconsistent, pointing to the need for further improvement. Ultimately, the study recommends that the technology industry actively collaborate with the medical community to ensure that AI tools and voice assistants deliver the best and most accurate emergency advice.