How DHS is hustling to leverage—and contain—generative AI

May 09, 2024


Discovering subtle patterns in signal data through AI can mean the difference between preventing and suffering an attack.

By Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

How the Department of Homeland Security uses AI

The Department of Homeland Security (DHS), which employs 260,000 people, is responsible for protecting the U.S. from everything from drug smuggling to election interference. The department is made up of smaller agencies, most of which are constantly collecting all kinds of threat data and signals. So, like the U.S. defense and intelligence communities, the DHS hopes AI systems can make sense of all this (often messy) data. In fact, the White House has mandated it: The DHS is mentioned 37 times in President Biden’s executive order on AI.

Discovering subtle patterns in signal data can mean the difference between preventing and suffering an attack. DHS Secretary Alejandro Mayorkas said at the RSA security conference in San Francisco this week that his department has already used AI to find the unique signature of a vehicle loaded with fentanyl approaching the southern border. When DHS agents had only an old photo of a child trafficking victim to go on, they turned to an AI system to generate an image of the child’s likely current appearance.

While the DHS is learning how to use AI internally, it’s also charged with helping to mitigate the inherent risks of the new technology. During a small roundtable with journalists at the RSA conference Tuesday, Mayorkas said his agency is keen “to ensure that we are mindful of its potential to do harm in the hands of an adverse actor and to defend our critical infrastructure against its malicious use.” Without proper guardrails in place, a generative AI system could spit out detailed directions for building a nuclear device, or the recipe for a deadly bioagent. 

That’s why Mayorkas and his chief AI officer, Eric Hysen, also met with some of the biggest AI companies during their trip to the Bay Area. The DHS has organized an “AI Safety Task Force” that Mayorkas says will eventually create a “national plan” for the safe development and application of AI systems in the country’s critical infrastructure. He says the task force also includes representatives from civil liberties and privacy groups, as well as from “critical infrastructure” companies such as Delta Air Lines, Occidental Petroleum, and Northrop Grumman.

I asked Mayorkas whether, after the first meeting of the group, he came away with the impression that the AI companies at the table had a full understanding of the DHS’s reasons for being concerned about the technology. “I think they understand it quite clearly,” Mayorkas said, suggesting that they too have a lot to gain. “One of our goals is to build public confidence in AI’s role in the operation of our critical infrastructure. . . . Right now there’s concern; there is distrust.” 

Google DeepMind may usher in a rich period of discovery in biology

Google DeepMind this week released AlphaFold 3, a new version of its AlphaFold AI system that can not only predict the structure of proteins but also model how proteins interact with other molecules in the cell, including DNA, RNA, and the small molecules often used in drugs. This accelerates researchers’ work to model how a new drug might interact with various receptor sites in the body.

While AlphaFold can predict hundreds of millions of structures and interactions within the cell, that’s still just a small subset of the universe of possible interactions that could be triggered when, for example, the molecules in a new drug design are introduced into the cell. “Even if you just think about the small molecule drug space, the number of [possible] designs is something like 10 to the sixtieth, which is just so outrageously large you can’t even comprehend it,” said DeepMind researcher Josh Abramson on a call with reporters Tuesday. Using AlphaFold might be something like carrying a toy flashlight into a pitch-black Superdome. Sounds daunting, but it’s out in that big, dark space where scientists may find an AIDS vaccine or a cure for cancer. AI, coupled with quality training data and massive compute power, might be science’s best hope of navigating toward recipes for drugs so complex, or so novel, that they’re beyond the reach of human intelligence alone.

And we’re just learning how to use the AI effectively. Benchmark tests show that AlphaFold 3 can’t yet reliably predict even the structures and interactions that lab testing has already confirmed.

Things may get very exciting when future models develop levels of intuition that allow them to predict and design drug functions that are outside the realm of their training data. “I sometimes say hallucination is a feature, not a bug, in biology,” says Anna Marie Wagner, who leads AI at the Boston biotech Ginkgo Bioworks. “I would probably take a model that’s a little bit less accurate at canonical benchmarks but helps me point lab experimentation into a much more interesting direction than a model that’s just going to be a little bit faster at getting to the already well-known answer.”


She describes a concept called “active learning,” in which the AI model is allowed to generate its own designs and then ask for the specific training data it needs to better understand a design’s biological effects and to guide the design process forward. “There’s a real advantage when you can unleash the model to tell you what it wants to learn to expand its range of applicability as quickly as possible,” she says. “If you can generate that [from physical lab testing] and feed it back into the model right away, suddenly that model can become much more accurate and predictive in novel spaces.”
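In code, the loop Wagner describes might look something like the minimal sketch below. It is purely illustrative: Model, propose_designs, uncertainty, and lab_experiment are hypothetical placeholders standing in for a generative design model and physical lab testing, not Ginkgo’s actual system.

```python
import random

class Model:
    # Toy stand-in for a generative design model.
    def __init__(self):
        self.training_data = []  # (design, measured_result) pairs

    def propose_designs(self, n):
        # Generate candidate designs; a real model would emit molecules, not floats.
        return [random.random() for _ in range(n)]

    def uncertainty(self, design):
        # Crude uncertainty: distance to the nearest design already tested.
        # Real systems might use ensemble disagreement or predictive entropy.
        if not self.training_data:
            return 1.0
        return min(abs(design - d) for d, _ in self.training_data)

    def fit(self):
        # Retrain on self.training_data; omitted in this toy version.
        pass

def lab_experiment(design):
    # Placeholder for physical lab testing that measures the real outcome.
    return design ** 2

model = Model()
for _ in range(5):
    candidates = model.propose_designs(100)
    # The model "asks" for the data it needs: label its 10 most uncertain designs.
    to_test = sorted(candidates, key=model.uncertainty, reverse=True)[:10]
    model.training_data += [(d, lab_experiment(d)) for d in to_test]
    model.fit()  # Feed the new lab results back into the model right away.
```

The key design choice is that the model, not the scientist, picks which experiments to run next, so each round of lab work targets the regions where the model knows least.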

Is OpenAI going to release a search engine or not? 

This week, Bloomberg reported that OpenAI was getting into the internet search game to take on Google. (The Information reported the same thing back in February.) It’s quite possible the rumors are true. There is demand for a new way of searching the web: Gartner predicts that traditional search volume will drop 25% by 2026, mainly due to chatbots and other virtual agents.

It was the arrival of ChatGPT in late 2022 that got a lot of people thinking that having a conversational back-and-forth with a chatbot might be a better way of getting information off the web than typing keywords into Google and wading through an ad-cluttered list of links. An OpenAI search tool would likely look a lot like ChatGPT, but with the added ability to check its prompts and answers against a web index. 
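In rough outline, that kind of retrieval-augmented setup might look like the sketch below. It is purely illustrative: search_web_index and generate are hypothetical placeholders for a web-index lookup and a language-model call, not OpenAI’s actual design.

```python
def search_web_index(query, k=5):
    # Hypothetical stand-in for querying a crawled, ranked web index.
    return [{"url": f"https://example.com/{i}", "snippet": f"passage {i} about {query}"}
            for i in range(k)]

def generate(prompt):
    # Hypothetical stand-in for a large-language-model completion call.
    return "[answer grounded in, and citing, the sources in the prompt]"

def answer_with_sources(question):
    # Retrieve fresh documents, then let the model answer against them,
    # rather than relying only on what it memorized during training.
    docs = search_web_index(question)
    context = "\n".join(f"- {d['url']}: {d['snippet']}" for d in docs)
    prompt = ("Answer using only the sources below, and cite them.\n"
              f"Sources:\n{context}\n\nQuestion: {question}")
    return generate(prompt), [d["url"] for d in docs]

answer, citations = answer_with_sources("traditional search volume forecast")
```

Grounding each answer in retrieved pages is what would distinguish such a tool from a plain chatbot: the model can cite current sources instead of guessing from stale training data.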

If OpenAI is indeed making a move into search, it will not be uncontested. Perplexity has built an impressive AI-native search tool, and AI search is the sole focus of the quickly growing and well-funded company. “Expect this to be ‘game on,’” Perplexity CEO Aravind Srinivas told me in response to the Bloomberg story. 

Google is best-positioned to offer AI search, for better or worse: The tech giant already has an experimental LLM-powered Search Generative Experience (SGE) that users can try, but it’s not yet part of the main search tool. A move to this type of interface would cannibalize Google’s traditional keyword-based search ads business. Google has only just begun to experiment with ways of monetizing SGE. 

 

ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

