In 2021, the Maryland Department of Health and the state police faced a crisis: fatal drug overdoses in the state were at an all-time high, and the authorities didn't know why.
In search of answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains essential measurement standards for a wide range of industrial sectors and health and safety applications.
There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials, techniques that could protect law enforcement officials and others who have to collect these samples. And a pilot program uncovered new, critical information almost immediately. Read the full story.
Adam Bluestein
This story is from the next print issue of our magazine. Subscribe now to read it and get a copy of the magazine when it lands!
Phase two of military AI has arrived
James O’Donnell
Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to their unit. But this deployment was unique: for the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT.
As I wrote in my new story, this experiment is the latest evidence of the Pentagon's push to use generative AI, tools that can engage in humanlike conversation, throughout its ranks, for tasks including surveillance. The push has raised alarms among some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes.