By Kyle Mizokami
– Air Force pilots in the not-so-distant future could fly and fight together like Luke Skywalker and R2-D2.
– One of the Air Force’s top officials is confident the service’s secret new fighter jet will have an artificial intelligence copilot.
– The AI could take over key tasks, flying and fighting the plane, to prevent the human pilot from being overwhelmed.
The U.S. Air Force’s secret new fighter jet, which it designed, built, and tested in just one year, will feature some kind of artificial intelligence copilot—a trusted computer algorithm that human pilots can rely on to assume critical tasks in the air.
That’s according to Will Roper, the Assistant Secretary of the Air Force for Acquisition, Technology, and Logistics, who in September shocked the world when he revealed the surprise existence of the service’s new, mysterious Next Generation Air Dominance (NGAD) fighter.
The Air Force has been incredibly tight-lipped about the sixth-generation fighter, only confirming it exists, and it’s flying … somewhere. But a few clues about NGAD have trickled out since the initial announcement, such as which defense contractor likely built the plane. And now, Roper has revealed (via Breaking Defense) that the NGAD will have an “AI-assisted copilot, maybe even ARTUµ.”
That’s the call sign—a.k.a. R2—that Roper and his team used to train the world-leading computer program µZero to operate a U-2 spy plane last week in California, marking the first time AI has controlled a U.S. military system.
In that groundbreaking experiment, the “crew” took part in an exercise centered around a simulated missile attack. The U-2 was assigned to locate enemy missile launchers on the ground. The human pilot kept a lookout for enemy aircraft, while the AI took over tactical navigation and sensors to search for the launch vehicles.
Modern aerial warfare—even the act of flying the airplane—is growing increasingly complicated. Pilots must master interfaces, procedures, and individual sensors and weapon systems.
In addition to monitoring traditional things like altitude, speed, fuel state, and other factors in flight, a fifth-generation fighter pilot must also keep a watchful eye on a host of sensors, from the human eyeball to infrared sensors, threat warning systems, and radar. Once combat commences, flying becomes exponentially more complicated, as pilots must account for enemy air and surface-to-air capabilities, strengths, and weaknesses—while still flying the plane.
An AI-assisted copilot could take on relatively simple tasks, such as communications, monitoring for threats, network security, and navigation. ARTUµ, which took over and tied together a U-2’s navigation and sensors to search for missile launchers, is seemingly on the more advanced end of the AI cockpit buddy spectrum.
Meanwhile, a human pilot could concentrate on tasks reserved for humans, such as flying the plane, authorizing weapons releases, approving changes to flight plans, and communicating with other humans at home base, in the air, and on the ground. The AI could even free the pilot to use the most important tool available to the human mind—an imagination—to look at a developing situation and turn it to his or her advantage.
Roper revealed his plans at a recent Defense Writers Group meeting, via Breaking Defense:
“What I expect will happen in the pilot, copilot role—the Luke Skywalker, R2-D2 role—is that pilots will gain an instinct, just like they have an instinct for stealth today, about when their AI crew pilot is performing well, or could perform well, and will turn over more of the reins to it. And [the pilot] will have a similar instinct of when it won’t be performing well, and will pull the reins back to the human.”
Source: Popular Mechanics
The focus of US defense artificial intelligence efforts is undergoing a transformation. The Pentagon’s artificial intelligence hub is shifting toward enabling joint warfighting operations, developing artificial intelligence tools that will be integrated into the Department of Defense’s Joint All-Domain Command and Control (JADC2) efforts.
JAIC is charged with accelerating AI adoption across the Department of Defense.
Nand Mulchandani, acting director of the Joint Artificial Intelligence Center, said: “The AI capabilities JAIC is developing as part of the joint warfighting operations mission initiative will use mature AI technology to create a decisive advantage for the American war fighter.”
That marks a significant change from where JAIC stood more than a year ago, when the organization was still being stood up with a focus on using AI for efforts like predictive maintenance. That transformation appears to be driven by the DoD’s focus on developing JADC2, a system-of-systems approach that will connect sensors to shooters in near-real time, according to c4isrnet.com.
“JADC2 is not a single product. It is a collection of platforms that get stitched together — woven together — into effectively a platform.” One example of the organization’s joint warfighting work is the fire support cognitive system, an effort JAIC is pursuing in partnership with the Marine Corps Warfighting Lab and the U.S. Army’s Program Executive Office Command, Control and Communications-Tactical. That system, Mulchandani said, will manage and triage all incoming communications in support of JADC2.
“We do have a project going on under joint warfighting which is actually going to go into testing,” he said.
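The article says the fire support cognitive system will "manage and triage all incoming communications." As a purely illustrative sketch of what message triage means in software terms (the class names, priority scheme, and messages below are hypothetical, not drawn from the actual JAIC system), a minimal version is a priority queue that surfaces the most urgent traffic first:

```python
# Hypothetical illustration of communications triage: order incoming
# messages so the most urgent are handled first. Not the real system.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Message:
    priority: int                  # lower value = more urgent
    text: str = field(compare=False)

class TriageQueue:
    """Min-heap over message priority: urgent traffic surfaces first."""
    def __init__(self) -> None:
        self._heap: list[Message] = []

    def ingest(self, msg: Message) -> None:
        heapq.heappush(self._heap, msg)

    def next_message(self) -> Message:
        return heapq.heappop(self._heap)

q = TriageQueue()
q.ingest(Message(3, "routine logistics update"))
q.ingest(Message(1, "fire mission request"))
q.ingest(Message(2, "sensor track handoff"))
print(q.next_message().text)  # "fire mission request"
```

The real system would of course assign priorities automatically from message content; the point here is only the triage ordering itself.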
By Alison DeNisco Rayome
The reality is that US and Chinese efforts to develop AI are entwined, even if coronavirus tensions and trade disagreements may spur a separation.
Artificial intelligence is a fast-growing field that plays an increasingly critical role in many aspects of our lives. A country’s AI prowess has major implications for how its citizens live and work — and for its economic and military strength moving into the future.
With so much at stake, the narrative of an AI “arms race” between the US and China has been brewing for years. Dramatic headlines suggest that China is poised to take the lead in AI research and use, due to its national plan for AI domination and the billions of dollars the government has invested in the field, compared with the US’ focus on private-sector development.
But the reality is that at least until the past year or so, the two nations have been largely interdependent when it comes to this technology. It’s an area that has drawn attention and investment from major tech heavy hitters on both sides of the Pacific, including Apple, Google and Facebook in the US and SenseTime, Megvii and YITU Technology in China.
“Narratives of an ‘arms race’ are overblown and poor analogies for what is actually going on in the AI space,” said Jeffrey Ding, the China lead for the Center for the Governance of AI at the University of Oxford’s Future of Humanity Institute. When you look at factors like research, talent and company alliances, you’ll find that the US and Chinese AI ecosystems are still very entwined, Ding added.
But the combination of political tensions and the rapid spread of the coronavirus throughout both nations is fueling more of a separation, which will have implications for both advances in the technology and the world’s power dynamics for years to come.
“These new technologies will be game-changers in the next three to five years,” said Georg Stieler, managing director of Stieler Enterprise Management Consulting China. “The people who built them and control them will also control parts of the world. You cannot ignore it.”
The first of two pivotal moments came in March 2016, when AlphaGo — a machine-learning system built by Google’s DeepMind that uses algorithms and reinforcement learning to train on massive datasets and predict outcomes — beat the human Go world champion Lee Sedol. This was broadcast throughout China and sparked a lot of interest — both highlighting how quickly the technology was advancing, and suggesting that because Go involves war-like strategies and tactics, AI could potentially be useful for decision-making around warfare.
The second moment came seven months later, when President Barack Obama’s administration released a series of reports on preparing for a future with AI, laying out a national strategic plan and describing the potential economic impacts. Some Chinese policymakers took those reports as a sign that the US was further ahead in its AI strategy than expected.
This culminated in July 2017, when the Chinese government under President Xi Jinping released a development plan for the nation to become the world leader in AI by 2030, including investing billions of dollars in AI startups and research parks.
Read full text: CNET
MIT engineers have designed a “brain-on-a-chip,” smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors — silicon-based components that mimic the information-transmitting synapses in the human brain.
The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. When they ran the chip through several visual tasks, the chip was able to “remember” stored images and reproduce them many times over, in versions that were crisper and cleaner compared with existing memristor designs made with unalloyed elements.
Their results, published on June 8, 2020, in the journal Nature Nanotechnology, demonstrate a promising new memristor design for neuromorphic devices — electronics that are based on a new type of circuit that processes information in a way that mimics the brain’s neural architecture. Such brain-inspired circuits could be built into small, portable devices, and would carry out complex computational tasks that only today’s supercomputers can handle.
“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”
Memristors, or memory resistors, are an essential element in neuromorphic computing. In a neuromorphic device, a memristor would serve as the transistor in a circuit, though its workings would more closely resemble a brain synapse — the junction between two neurons. The synapse receives signals from one neuron, in the form of ions, and sends a corresponding signal to the next neuron.
A transistor in a conventional circuit transmits information by switching between one of only two values, 0 and 1, and doing so only when the signal it receives, in the form of an electric current, is of a particular strength. In contrast, a memristor would work along a gradient, much like a synapse in the brain. The signal it produces would vary depending on the strength of the signal that it receives. This would enable a single memristor to have many values, and therefore carry out a far wider range of operations than binary transistors.
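The binary-versus-gradient contrast above can be sketched in a few lines of code. This is a toy model only — the update rule and parameter names are illustrative assumptions, not the physics of the silver-copper alloy devices in the paper:

```python
# Toy contrast: a conventional transistor switches between exactly two
# values, while a memristor-like element outputs along a gradient and
# "remembers" past inputs in its stored conductance.

def transistor(voltage: float, threshold: float = 0.5) -> int:
    """Binary switch: output is 0 or 1, nothing in between."""
    return 1 if voltage >= threshold else 0

class Memristor:
    """Toy memristor: output varies continuously, and each input
    nudges the stored conductance, so the device retains history."""
    def __init__(self, conductance: float = 0.5) -> None:
        self.conductance = conductance

    def apply(self, voltage: float, rate: float = 0.1) -> float:
        out = voltage * self.conductance          # graded, not binary
        # Strong inputs push conductance up, weak ones pull it down
        # (clamped to [0, 1]) -- the element's "memory".
        self.conductance = min(1.0, max(0.0,
            self.conductance + rate * (voltage - 0.5)))
        return out

print(transistor(0.2), transistor(0.9))   # only ever 0 or 1
m = Memristor()
outputs = [round(m.apply(0.8), 3) for _ in range(3)]
print(outputs)  # climbs as repeated strong input raises conductance
```

Repeating the same strong input yields a rising sequence of outputs, which is the qualitative behavior that lets a single memristor hold many values instead of two.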
Read full text: SciTechDaily