The Power and Pitfalls of AI for U.S. Intelligence

In one example of the IC’s successful use of artificial intelligence, after exhausting all other avenues (from human spying to signals intelligence), the U.S. was able to identify an unidentified WMD research and development facility in a large Asian country by locating a bus that traveled between it and other known facilities. To do that, analysts employed algorithms to search and evaluate imagery of nearly every square foot of the country, according to a senior U.S. intelligence official who spoke on condition of anonymity.
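The broad pattern the official describes, flagging sightings of the same vehicle across imagery tiles and correlating them across sites, can be sketched as a simple co-occurrence search. This is an illustrative toy, not the IC's actual method; all site names, dates, and data below are hypothetical:

```python
# Toy sketch: link an unknown site to known facilities via repeated
# sightings of the same vehicle. All names and data are hypothetical.
from collections import defaultdict

# (day, site) pairs where a detector flagged the same bus in imagery tiles
sightings = [
    (1, "known_facility_A"), (1, "unknown_site"),
    (2, "known_facility_B"), (2, "unknown_site"),
    (3, "known_facility_A"), (3, "unknown_site"),
    (4, "known_facility_C"),
]

# Group sightings by day to see which sites the bus visited together
by_day = defaultdict(set)
for day, site in sightings:
    by_day[day].add(site)

# Count how often an unlabeled site is co-visited with a known facility
links = defaultdict(int)
for sites in by_day.values():
    known = {s for s in sites if s.startswith("known_")}
    other = sites - known
    for k in known:
        for o in other:
            links[(k, o)] += 1

# Sites repeatedly co-visited with known facilities become candidates
candidates = sorted(links.items(), key=lambda kv: -kv[1])
```

Here the unknown site surfaces because the same bus keeps appearing at it and at a known facility on the same day; the real problem differs mainly in scale, with detectors running over countrywide imagery rather than a handful of toy records.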

While AI can compute, retrieve, and employ programming that performs bounded rational analyses, it lacks the calculus to properly dissect the more emotional or unconscious components of human intelligence, which psychologists describe as System 1 thinking.

For example, AI could draft intelligence reports similar to newspaper articles about baseball, which follow a structured, logical flow with repetitive content elements. However, AI was found lacking when briefs required complex reasoning or logical arguments to justify conclusions. When the intelligence community tested the capability, the product looked like an intelligence brief but was otherwise nonsensical, intelligence officials said.

Such algorithmic processes can be layered on top of one another, adding complexity to computational reasoning, but even then those algorithms cannot interpret context as well as humans can, especially when it comes to language, such as hate speech.

The AI’s understanding may be more similar to that of a human toddler, said Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to customers from violence to disinformation. “For example, AI can understand the basics of human language, but the underlying models don’t have the underlying or contextual knowledge to accomplish a specific task,” Curwin said.

“From an analytics standpoint, it’s hard for AI to interpret intent,” Curwin added. “Computer science is a valuable and important field, but computational social scientists are making giant leaps in enabling machines to explain, understand, and predict behavior.”

To “build models that can begin to replace human intuition or cognition,” Curwin explained, “researchers must first understand how to interpret behavior and translate that behavior into something that AI can learn.”

While machine learning and big data analytics can provide predictive analysis about what might or could happen, they cannot explain to analysts how or why they arrived at those conclusions. This opacity in AI reasoning, and the difficulty of vetting sources that consist of extremely large datasets, can affect the actual or perceived soundness and transparency of those conclusions.
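The transparency gap described above can be made concrete: even a very simple scoring model hands an analyst a bare yes/no call with no rationale attached, and recovering a rationale requires deliberately exposing the model's internals. A minimal sketch, with hypothetical feature names, weights, and threshold:

```python
# Minimal sketch of prediction vs. explanation. Feature names, weights,
# and the 0.5 threshold are hypothetical illustration values.
weights = {"signal_a": 0.6, "signal_b": -0.2, "signal_c": 0.3}

def predict(features):
    """Return only a yes/no call -- the opaque output an analyst sees."""
    score = sum(weights[name] * value for name, value in features.items())
    return score > 0.5

def explain(features):
    """Break the same score into per-feature contributions."""
    return {name: weights[name] * value for name, value in features.items()}

obs = {"signal_a": 1.0, "signal_b": 1.0, "signal_c": 0.5}
flagged = predict(obs)        # True or False, with no justification attached
contributions = explain(obs)  # which inputs drove the score, and by how much
```

For a three-weight linear model the contributions are trivial to read off; for the large, nonlinear models the article describes, no comparably simple decomposition exists, which is precisely the opacity problem.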

Transparency of reasoning and sourcing are requirements of the analytic tradecraft standards that govern products produced by and for the intelligence community. Analytic objectivity is also required by statute, sparking calls from within the U.S. government to update such standards and laws given the growing prevalence of artificial intelligence.

Some intelligence practitioners also believe that machine learning and algorithms, when used to make predictive judgments, are more art than science. That is, they are prone to bias and noise, can rest on unsound methodologies, and can lead to errors similar to those found in the criminal forensic sciences and arts.
