AI's power and pitfalls for US intelligence



From cyber operations to disinformation, artificial intelligence expands the scope of national security threats that can target individuals and entire societies with precision, speed, and scale. As the US competes to stay ahead, the intelligence community is grappling with the fits and starts of an impending AI-driven revolution.


The US intelligence community has launched initiatives to grapple with AI's implications and ethical uses, and analysts have begun to conceptualize how AI will revolutionize their discipline. However, these approaches and other practical applications of such technologies across the IC have been largely fragmented.


As experts warn that the US is not prepared to defend itself against AI wielded by its strategic rival, China, Congress has urged the IC to develop a plan for integrating such technologies into workflows to create a "digital AI ecosystem" in the Intelligence Authorization Act of 2022.

The term AI is used for a group of technologies that solve problems or perform tasks that mimic human perception, cognition, learning, planning, communication, or action. AI includes technologies that can theoretically survive autonomously in novel situations, but its more common use is in machine learning or algorithms that predict, classify, or approximate empirical results using big data, statistical models, and correlation.
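To make the "predict, classify, or approximate" sense of machine learning concrete, here is a minimal sketch of a nearest-centroid classifier: it labels a new data point by its statistical proximity to previously seen examples. All of the data, labels, and feature values below are invented for illustration; this is not any IC system.

```python
# A toy nearest-centroid classifier: labels a new point by statistical
# proximity to labeled examples. All data here is invented.
from statistics import mean

def train(examples):
    """Compute one centroid (feature-wise mean) per label."""
    centroids = {}
    for label in {lbl for _, lbl in examples}:
        points = [x for x, lbl in examples if lbl == label]
        centroids[label] = [mean(col) for col in zip(*points)]
    return centroids

def classify(centroids, point):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(point, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

examples = [([1.0, 1.0], "benign"), ([1.2, 0.9], "benign"),
            ([8.0, 9.0], "suspicious"), ([9.1, 8.5], "suspicious")]
model = train(examples)
print(classify(model, [8.5, 9.2]))  # a point near the "suspicious" cluster
```

The classifier has no understanding of what the features mean; it only exploits statistical regularity in the data, which is precisely the "correlation, not cognition" distinction the paragraph above draws.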


While AI that can mimic the human mind remains theoretical and impractical for most IC applications, machine learning addresses fundamental problems posed by the volume and velocity of information that today's analysts must evaluate.

At the National Security Agency, machine learning finds patterns in the mass of signals intelligence gathered from global web traffic. Machine learning also combs through international news and other publicly available reports at the CIA's Directorate of Digital Innovation, which is responsible for advancing digital and cyber technologies in human and open-source collection, as well as in covert action and all-source analysis, which integrates every kind of raw intelligence collected by American spies, whether technical or human. All-source analysts evaluate the meaning or value of that intelligence once it is assembled, distilling it into finished assessments or reports for national security policymakers.

Open source, in fact, is key to the intelligence community's adoption of AI technologies. Many AI technologies depend on big data to make quantitative judgments, and the scale and relevance of publicly available data cannot be replicated in classified environments.

Harnessing AI and open-source data will let the IC make better use of its other, limited collection capabilities, such as human intelligence gathering. Those other collection disciplines can be reserved for obtaining the secrets hidden not only from humans but from AI as well. In this context, AI may provide better global coverage of unanticipated or non-priority collection targets that could quickly develop into threats.

Meanwhile, at the National Geospatial-Intelligence Agency, AI and machine learning extract data from images taken daily from nearly every corner of the world by commercial and government satellites. And the Defense Intelligence Agency trains algorithms to recognize and evaluate nuclear, radar, environmental, material, chemical, and biological measurements and signatures, improving the productivity of its analysts.

In one example of the IC's successful use of AI, after exhausting all other avenues, from human spies to signals intelligence, the US was able to locate an unidentified weapons-of-mass-destruction research and development facility in a large Asian country by finding a bus that traveled between it and other known facilities. To do that, analysts used algorithms to search and evaluate images of nearly every square inch of the country, according to a senior US intelligence official who spoke on background with the understanding that his name would not be published.

While AI can compute, retrieve, and employ programming that performs limited rational analyses, it lacks the calculus to properly dissect the more emotional or unconscious components of human intelligence, which psychologists describe as System 1 thinking.

AI, for example, can draft intelligence reports similar to newspaper articles about baseball games, which follow a structured, logical flow with repeatable content elements. However, when briefs require complex reasoning or logical arguments that justify or demonstrate conclusions, AI falls short. When the intelligence community tested that capability, the intelligence official says, the product looked like an intelligence brief but was otherwise nonsensical.

Such algorithmic processes can be overlapped, adding layers of complexity to computational reasoning, but even then those algorithms cannot interpret context as well as humans can, especially when it comes to language, such as hate speech.
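A toy illustration of the context problem described above (the flagged words and sentences are invented): a keyword rule flags the same word in a hostile statement and in a news report about that statement, because it has no notion of context.

```python
# Keyword matching without context: the rule cannot distinguish a threat
# from reporting about a threat. Words and examples are invented.
FLAGGED = {"attack", "destroy"}

def keyword_flag(text):
    """Flag text if any flagged word appears, regardless of context."""
    return any(word.strip(".,") in FLAGGED for word in text.lower().split())

hostile = "We will attack them at dawn."
reporting = "Officials condemned the attack at dawn."
print(keyword_flag(hostile), keyword_flag(reporting))  # both True
```

Both sentences trigger the rule, even though only one expresses intent; recovering that distinction requires the contextual understanding the paragraph says algorithms lack.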

Understanding AI may be more akin to understanding a human toddler, says Eric Kerwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients, from violence to disinformation. "For example, AI can understand the basics of human language, but the foundational models don't have the latent or contextual knowledge to accomplish specific tasks," Kerwin says.

"From an analytical perspective, it's difficult for AI to interpret intent," Kerwin adds. "Computer science is a valuable and important field, but it is social computational scientists who are making the big leaps in enabling machines to interpret, understand, and predict behavior."

To "build models that can begin to replace human intuition or cognition," Kerwin explains, "researchers must first understand how to interpret behavior and translate that behavior into something AI can learn."

While machine learning and big-data analytics provide predictive analysis about what might happen or will likely happen, they cannot explain to analysts how or why they arrived at those conclusions. The opacity of AI reasoning and the difficulty of vetting sources that consist of extremely large datasets can affect the actual or perceived validity and transparency of those conclusions.

Transparency in reasoning and sourcing is a requirement for the analytical tradecraft standards of products produced by and for the intelligence community. Analytical objectivity is also required by statute, which has sparked calls within the US government to update such standards and laws in light of AI's growing prevalence.

Machine learning and the algorithms used for predictive judgments are also regarded by some intelligence professionals as more art than science. That is, they are prone to biases and noise, and they may be accompanied by unsound methodologies that lead to errors similar to those found in the criminal forensic sciences and arts.

"Algorithms are just a set of rules, and by definition they are objective, because they are totally consistent," says Welton Chang, co-founder and CEO of Pyrra Technologies. With algorithms, objectivity means applying the same rules over and over again; differences in responses are therefore evidence of subjectivity.

"It's different when you take into account the tradition of the philosophy of science," Chang says. "The tradition of what is considered subjective is a person's own point of view and biases. Objective truth is derived from consistency and agreement with external observation. When you evaluate an algorithm solely on its output, and not on whether that output matches reality, that's where you miss the built-in bias."
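Chang's distinction can be shown in a few lines (the rule and the ground truth below are both invented): an algorithm can be perfectly consistent, and therefore "objective" by the rules-based definition, while being consistently wrong against external observation.

```python
# Consistency is not objectivity: a rule that always gives the same answer,
# and a (invented) ground truth that shows the answer is biased.
def biased_rule(region):
    """Always labels one region 'high risk' -- the same answer every time."""
    return "high risk" if region == "east" else "low risk"

# Consistent: repeated runs agree with one another...
assert all(biased_rule("east") == "high risk" for _ in range(3))

# ...but checked against reality (invented here), the bias appears.
ground_truth = {"east": "low risk", "west": "low risk"}
errors = sum(biased_rule(r) != truth for r, truth in ground_truth.items())
print(errors)  # nonzero: the bias shows only when output is compared to reality
```

Evaluating only the rule's internal consistency would find nothing wrong; the error count exists only once an external standard is introduced, which is Chang's point.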

Depending on the presence or absence of biases and noise in massive datasets, especially in more pragmatic, real-world applications, predictive analytics has sometimes been called "astrology for computer science." But the same might be said of human analysis. A scholar on the subject, Stephen Marrin, writes that intelligence analysis as a human discipline is "merely a craft masquerading as a profession."

Analysts in the US intelligence community are trained to use Structured Analytic Techniques, or SATs, to make themselves aware of their own cognitive biases, assumptions, and reasoning. SATs, which employ strategies that run the gamut from checklists to matrices that test assumptions or predict alternative futures, externalize the thinking or reasoning used to support intelligence judgments, which is especially important given the fact that, in the covert competition between nation-states, not all facts are known or knowable. But even SATs, as used by humans, have come under scrutiny from experts such as Chang, specifically for the lack of scientific testing that could prove their efficacy or logical validity.
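One of the matrix-style SATs mentioned above is Analysis of Competing Hypotheses, which scores each piece of evidence against each hypothesis and favors the hypothesis with the least contradictory evidence. The sketch below is a rough, simplified rendering of that idea; the hypotheses, evidence items, and scores are all invented.

```python
# A simplified Analysis-of-Competing-Hypotheses matrix: +1 consistent,
# -1 inconsistent, 0 not applicable. All entries are invented.
evidence_scores = {
    "increased shipments": {"weapons site": +1, "civilian plant": -1},
    "no public records":   {"weapons site": +1, "civilian plant": -1},
    "open commercial ads": {"weapons site": -1, "civilian plant": +1},
}

def least_contradicted(scores):
    """ACH keeps the hypothesis with the fewest inconsistencies."""
    hypotheses = {h for row in scores.values() for h in row}
    def inconsistencies(h):
        return sum(1 for row in scores.values() if row.get(h, 0) < 0)
    return min(hypotheses, key=inconsistencies)

print(least_contradicted(evidence_scores))
```

The value of the technique is less the arithmetic than the externalized matrix itself: the analyst's assumptions are laid out where others can challenge them, which is exactly what is hard to do with an opaque model.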

As AI is expected to increasingly augment or automate analysis for the intelligence community, it has become urgent to develop and implement standards and methods that are scientifically sound and ethical for law-enforcement and national-security contexts. While intelligence analysts wrestle with how to square AI's opacity with the standards of evidence and methods of reasoning those contexts demand, the same struggle applies to understanding analysts' own unconscious reasoning, which can lead to accurate or biased conclusions.


Credit: www.wired.com
