British AI startup with government ties is developing tech for military drones


A company that has worked closely with the UK government on AI safety, the NHS and education is also developing AI for military drones.

The consultancy Faculty AI has “experience developing and deploying AI models on to drones”, or unmanned aerial vehicles, according to a defence industry partner company.

Faculty has become one of the most active companies selling AI services in the UK. Unlike OpenAI, DeepMind or Anthropic, it does not develop models itself, instead focusing on reselling models, particularly OpenAI’s, and advising on their use in government and industry.

Faculty gained particular prominence in the UK after working on data analysis for the Vote Leave campaign before the Brexit vote. Boris Johnson’s former adviser Dominic Cummings then gave the firm government work during the pandemic and included its chief executive, Marc Warner, in meetings of the government’s scientific advisory committee.

Since then, the company, officially called Faculty Science, has tested AI models on behalf of the UK government’s AI Safety Institute (AISI), set up in 2023 under then prime minister Rishi Sunak.

Governments around the world are scrambling to understand the safety implications of artificial intelligence, after rapid advances in generative AI sparked a wave of hype around its possibilities.

Weapons makers are potentially interested in putting AI on their drones, ranging from “loyal wingmen” that could fly alongside fighter jets to loitering munitions that are already capable of waiting for targets to appear before firing on them.

The latest technological developments have raised the prospect of drones that can track and kill without a human “in the loop” making the final decision.

In a press release announcing its partnership with the London-based Faculty, the British startup Hadean wrote that the two companies are working together on “subject identification, tracking object movements, and exploring the development, deployment, and operations of autonomous swarming”.

Faculty’s work with Hadean is understood not to have involved weapons targeting. However, Faculty did not answer questions about whether it is working on drones capable of applying lethal force, nor give further details of its defence work, citing confidentiality agreements.

A Faculty spokesperson said: “We are helping to develop new AI models that will help our defence partners create safer and more robust solutions,” adding that the company has “rigorous ethical policies and internal processes” and follows the Ministry of Defence’s ethical guidelines on AI.

The spokesperson said Faculty had a decade of experience in AI safety, including work on combating child sexual abuse and terrorism.

The Scott Trust, the ultimate owner of the Guardian, is an investor in Mercuri VC, formerly GMG Ventures, which is a minority shareholder in Faculty.

Faculty, led by its chief executive, Marc Warner, continues to work closely with AISI. Photograph: Faculty AI

“We have been working on AI safety for a decade and are global experts in this field,” the spokesperson said. “That’s why governments and model developers trust us to ensure frontier AI is safe, and defence customers trust us to apply AI ethically to help keep citizens safe.”

Many experts and politicians have urged caution before introducing more autonomous technologies into the military. In 2023, a House of Lords committee called on the UK government to seek a non-binding treaty or agreement to clarify how international humanitarian law applies to lethal drones. In September, the Green Party called for laws banning lethal autonomous weapons systems outright.

Faculty continues to work closely with AISI, putting it in a position where its judgments could influence UK government policy.

In November, AISI contracted Faculty to investigate how large language models “are used to facilitate criminal or otherwise undesirable behaviour”. AISI said the contract winner – Faculty – “will be a significant strategic collaborator with AISI’s safeguards team, contributing key insights directly to AISI’s models of system security”.


The company also works directly with OpenAI, the startup behind the latest wave of AI enthusiasm, to use its ChatGPT model. Experts have previously raised concerns about a potential conflict of interest in Faculty’s work with AISI, according to the news website Politico. Faculty did not give details of which companies’ models it had tested, although it did test OpenAI’s o1 model before its release.

The government has previously said of Faculty’s AI work for AISI: “crucially, they are not conflicted through the development of their own model.”

The Green Party peer Natalie Bennett said: “The Green Party has long expressed serious concerns about the ‘revolving door’ between industry and government, raising issues ranging from gas company staff being seconded to work on energy policy to former defence ministers going to work for arms companies.

“The fact that a single company has taken on a large number of government contracts to work on AI while also working with the AI Safety Institute to test large language models is a serious concern – not so much ‘poacher turned gamekeeper’ as taking on both roles at the same time.”

Bennett also highlighted that the UK government “has yet to fully commit” to ensuring there is a human in the loop in autonomous weapons systems, as recommended by the Lords committee.

Faculty, whose largest shareholder is a Guernsey-registered holding company, has also sought to maintain close ties with the UK government, winning contracts worth at least £26.6m, according to government records. These include contracts with the NHS, the Department of Health and Social Care, the Department for Education and the Department for Culture, Media and Sport.

These contracts are a significant source of income for a company that had a turnover of £32m in the year to 31 March. It made a loss of £4.4m in that period.

Albert Sanchez-Graells, professor of economic law at the University of Bristol, warned that the UK relies on “restraint and responsibility” from tech companies in developing AI.

“Companies supporting AISI’s work must avoid organizational conflicts of interest arising from their work for other parts of government and broader market-based AI activities,” Sanchez-Graells said.

“Companies with a portfolio of AI activities as broad as Faculty’s need to answer questions about how they ensure their advice to AISI is independent and impartial, and how they avoid taking advantage of that knowledge in their other activities.”

The Department for Science, Innovation and Technology declined to comment, saying it would not go into detail about individual commercial contracts.
