In a Senate Committee on Armed Services hearing today on how the Department of Defense can both leverage AI and mitigate its risks, senators and industry leaders discussed regulatory approaches to AI for commercial and defense applications, the specific obstacles the DoD faces in becoming the global leader in leveraging AI, and potential solutions to overcoming them.
Shyam Sankar, Palantir CTO; Josh Lospinoso, Shift5 CEO; and Jason Matheny, Rand Corporation CEO and Commissioner of the National Security Commission on AI, provided expert testimony. The hearing was chaired by Sen. Joe Manchin (D-WV) and Sen. Mike Rounds (R-SD).
After describing the open letter calling for a pause on AI development that the Future of Life Institute published in March, Rounds said, "I think the greater risk, and I'm looking at this from a US security standpoint, is taking a pause while our competitors leap ahead of us in this field... I don't believe that now is the time for the US to take a break."
A pause would be "close to impossible... It's also unclear how we would use that pause," Matheny responded.
And "other than ceding the advantage to the adversary," Sankar added, a pause would have no effect. "The bigger consequence is the nature of the AIs. China has already said that AI should have socialist characteristics... To the extent that that becomes the standard AI for the world, [it] is highly problematic. I would double down on the idea that a democratic AI is crucial."
A pause would be "impractical," Lospinoso agreed. "We [would] abdicate leadership on ethics and norms, not to mention practical implications of us falling behind on cyber security, military applications."
Though there was consensus against the letter's proposed pause, Matheny repeatedly called for the government to create a regulatory regime that would require licenses for AI development, require companies to report when and how they're training LLMs, and essentially ban open-source development of LLMs.
"We need a licensing regime, a government system of guard rails, around the models that are being built, the amount of compute used by those models... I think we're going to need a regulatory approach that allows the government to say, 'Tools of a certain size can't be shared freely around the world, to our competitors, and need to have certain guarantees of security before they're deployed.'"
The DoD should additionally "invest in potential moonshots for AI security, including microelectronic controls that are embedded in AI chips to prevent the development of AI models without security safeguards," Matheny said, and "generalizable approaches to evaluate the safety of AI systems before they're deployed."
Matheny also sketched part of a roadmap for maintaining competitive advantage in AI through export controls: "Ensure strong export controls of leading edge AI chips and equipment, while licensing benign uses of chips that can be remotely throttled as needed."
In his questioning, Manchin repeatedly referred to the "early days" of the Internet and Section 230, which he unambiguously implied (though for vague reasons) was a missed opportunity to establish the right regulation. Manchin said he hopes "we've learned from those mistakes" and will "put guardrails in place" to avoid similar mistakes with AI.
Notably, Manchin asked all three industry leaders if they would provide a set of regulatory recommendations to the committee in 30 to 60 days.
Palantir CTO Sankar called for the US to adopt a more hands-on, accelerationist approach to AI, which, in his view, is practically a requirement for securing global geopolitical dominance.
We need to "spend at least 5% of our budget on capabilities that will terrify our adversaries," Sankar said.
"We must completely rethink what we are building and how we are building it. AI will completely change everything. Even toasters, but most certainly tanks."
"This will be disruptive and emotional. Many incumbents in government will be affected, and they will feel threatened and dislocated," he said. And later: "What keeps me up at night is: do we have the will? The issue of AI adoption is one of willpower. Are we adopting AI like our survival depends on it? Because I believe it does. And I think you see that in our adversaries, they [realize it's a matter of survival]."
Lospinoso was focused on the challenges the DoD faces in data collection, management, and transfer.
"Most major weapons systems are not AI ready," he said. "Unfortunately, the DoD struggles to liberate even the simplest data streams from our weapons systems. These machines are talking, but the DoD is unable to hear them. We are unable to deploy great AI weapons systems without great data. This requires taking seriously the difficult, unglamorous work of building great systems."
"We must solve the operational challenge of transferring terabytes of data from the field to the cloud, making them available to the AI technologies they will fuel," he said.
âWe're not collecting the data from these weapons. It's all about having a massive data set. It's not usable. The vast majority of data that these systems generate evaporate into the ether immediately.â
And later, Lospinoso said that the "single biggest asymmetric threat that we face is the cyber security of our weapons systems."
Lospinoso warned that "if [this] trend [continues], China will surpass us in a decade."
---
In addition to the above, the hearing spent significant time on China, American companies working with China, the concept of authoritarian vs. democratic AI, DoD efforts to red-team AI, using AI for cybersecurity, and the need for America to attract and retain top AI talent.
-Brandon Gorrell