In weapons systems, as in many other areas of life, Artificial Intelligence is being heralded as “the future for all humankind”. This description is part of the problem: it constitutes a submission to a fatalistic view of the future in which we are all information organisms (“inforgs”). It does not have to be this way, but the omens are discouraging: understanding the nature of the problem is the first step to countering it.
The question above is asked by Kym Bergmann in the ASPI publication, The Strategist. The argument is that Australian Defence planners have become increasingly nervous as a result of “a number of emerging disruptive technologies that will have a profound effect on military operations in the very near future … These include, but aren’t limited to: artificial intelligence (AI) and machine learning; micro uninhabited aerial systems (UAS); quantum computing; hypersonics; micro ‘cube’ satellites and matching launch technologies; uninhabited underwater systems; the vastly increasing power of conventional explosives utilising nanotechnology; and information operations and cyber warfare.” According to Bergmann, “some Australian planners can also foresee the possibility that emerging disruptive technologies could leave the ADF extraordinarily vulnerable”.
The cited sources of this alarm appear to be the RAAF’s signature air power conference held earlier this year and, regrettably, a short article by Henry Kissinger in The Atlantic, which concluded that “these sorts of developments in AI mark the end of the Age of Enlightenment”. [“Regrettably” for two reasons: first, because there are far better sources on the questions raised than this vignette, John R. Searle being one; and second, because, on the basis of Kissinger’s career as enthusiastic brothel-keeper to the American imperial imaginary, I am fully in agreement with the late Christopher Hitchens: the only published works of Kissinger’s we should be reading are his prison notebooks.]
These reservations to one side, Bergmann is on to something important but has reduced its significance to the vulnerability of the ADF’s weapons platforms, current and envisaged. And even overriding this, of course, is the blasé assumption in the title: that there are “right wars”, despite Australia’s post-WW II involvement in a series of US-orchestrated campaigns that were variously unnecessary, illegal and unethical, costly, or failed, or all of these at once. Again, regrettably, what is missing is any sense that something is missing.
Despite these additional misgivings, it is appropriate to return to the focus of the article. While, yes, the vulnerability referred to is important, it is nowhere near as important as, for example, the damage wreaked by an increasingly uncritical celebration of human dependency on data and a hubristic confidence in computation based on collections of data.
More specifically still, the first-order questions which relegate those concerning the vulnerability of the ADF’s weapons platforms to a lower order of significance are those related to the advent of artificial intelligence, its embrace by those with influence and power, and its consequences for democratic politics – or, rather, for those superficial vestiges of this declaratory state which are curated for use by a passive population during occasional participatory rituals such as elections.
Of particular significance in the present context is the vision of technological symbiosis between AI and the weapons systems listed above – the foremost being the imposition of chrono-politics on national security policy-making and decision-making. Strictly speaking, the term chrono-politics is a misnomer: the time between threat identification/confirmation and decision for so many of these weapons is simply not human and thus certainly not democratic, if by that we mean a minimal period in which informed discussion, debate, consensus and consent can take place. If you doubt this, ask, “How do you experience a nanosecond?”
Accordingly, if the circumstances do not permit exiting the situation, the only solution will be to delegate. The explanatory logic will once again emphasise that “everyone is doing it” (which for advanced states is largely true), that there is no holding back scientific and technological progress, and thus, TINA! (There Is No Alternative). The transition to a technologically determined and reduced non-politics is thereby achieved by what Norbert Wiener dismissed as a decision by “gadget worshippers”. Experiencing the illusion of control, they defer to the authority of gadgets and effectively conduct a discourse that is both gadget-enabled and gadget-enfeebled.
While the machines involved will compute vast amounts of data at superhuman rates, they will, at the same time, proceed with the quintessential disqualifications of the computer-as-gadget and computer science as faux science – namely their gross cognitive deficiencies. Simply put, in order to “work” the machine must be programmed to ask only certain types of questions, and to accept certain types of data.
To be clear, computers do not, in human terms and inter alia, think, reason, process information, perceive and decide any more than does a slide-rule; nor do they possess beliefs, desires, ethical sense, motivation or even, it must be emphasised, intelligence. Also to be emphasised is that they are created by conscious agents, as is the information they process and provide. And any survey of the literature reveals that computer engineering has no understanding of human consciousness.
Inevitably, then, the computer is a device designed by fallible, unaccountable humans to restrict options according to a binary reduction of the world and the reigning abusive simplifications and synoptic abstractions – common examples being the broadly perpetrated fictions of Rational Economic Man and Homo Strategicus. The former gave us humans-as-calculating predators, and the latter the destruction of Vietnam according to the calculations of Robert Strange McNamara.
This is not to deny the benefits of computers in certain fields, but it is to state categorically that the computer, by virtue of its design, is anti-political in general, and undemocratic in particular, from the outset. And to the extent that such a world view based on instrumental reason finds ambiguity intolerable, it is incipiently fascistic.
The overall danger, therefore, is not just that some Australian platforms will be rendered obsolete – which they probably will – but that Australia will capitulate to this variant of technophilia without thinking through the likely consequences. Sadly, the indications are already evident. Australia’s alliance with the United States impels automaticity, and Richard Tanter has outlined the dimensions. In January, the Chief of the Australian Army, Lieutenant-General Angus Campbell, argued for the necessary e-resources, and in May, the chief of the Australian Defence College, Major-General Mick Ryan, went further with a proposal to turn “riflemen into robot wranglers” – empowering an individual soldier to control up to ten robots, preferably those of a more, rather than less, autonomous nature. He recommends university involvement in the requisite AI and notes that all of the Group of Eight (Go8) universities have AI programmes and that collaboration is under way.
While full credit must be given to both of these senior officers for stating that these developments demand a corresponding ethic, the history of technology so often records it outstripping ethics and law. Contemporary advances provide a pessimistic prospect.
Not only is the Pentagon demanding that its AI Centre be operational by the end of 2018, but a report by the RAND Corporation, a think-tank long favoured by the US military, has concluded that the use of AI in emerging weapons systems could very well increase the risk of nuclear war.
From 1982 to 1988, Michael McKinley taught diplomacy, international relations and strategy in the Department of Politics at UWA. From 1988 to 2014 he taught diplomacy, international relations and strategy at the ANU. He is currently a member of the Emeritus Faculty at the ANU.