Meta’s eye-catching end-of-year acquisition of Manus, a Singapore-based developer of AI agents, for a reported value of more than $2 billion[1], reflects a wider sector shift towards the development and deployment of semi-autonomous, outcome-focused software agents capable of performing complex tasks without direct and constant human input. 

The Meta deal notably follows Salesforce’s acquisition, earlier in the year, of Convergence AI, a London-based AI agent developer, and a wave of agentic AI launches by major tech players and start-ups, with AWS, Databricks, IBM, Google, Microsoft, Palantir and Salesforce all releasing new agentic AI solutions in 2025.

Although news reports (and dinner table conversations) commonly focus on harmful uses of GenAI tools and the dystopian perils of artificial general intelligence (AGI) (AI that thinks like, or better than, a human), the deployment of AI agents, and more broadly agentic AI systems, is a paradigm shift that is happening now as the technology becomes increasingly accessible and reliable. 

What is Agentic AI?

While the terms “agentic AI” and “AI agents” are often used together or interchangeably in casual conversation, there is a significant difference.  AI agents are, in essence, autonomous, decision-making systems powered by artificial intelligence (modern AI agents commonly use LLMs as their “brains”).  They can be thought of as specialised employees that are deployed to undertake particular functions.  In contrast, agentic AI systems and multi-agent systems are more sophisticated systems that are designed to operate with a higher degree of autonomy for the purpose of achieving a wider set of objectives and goals; these systems act as an AI agent “conductor” or manager that deploys, coordinates and manages multiple agents.

Take the analogy of an orchestra: AI agents are the individual musicians (violinists, cellists, flautists, tuba and triangle players), whereas the agentic AI system is the conductor, responsible for managing the musicians, shaping the sound and delivering a unique artistic vision.
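The conductor-and-musicians relationship can be sketched in code. The sketch below is purely illustrative: the class names, the `plan` structure and the string outputs are assumptions for the purpose of the analogy, not a real agent framework, and a real agent would call an LLM rather than return a fixed string.

```python
# Illustrative sketch of the "conductor" (orchestrator) pattern behind
# agentic AI systems.  All names here are hypothetical, not a real framework.

class Agent:
    """A specialised agent deployed for one particular function (a 'musician')."""
    def __init__(self, role):
        self.role = role

    def run(self, task):
        # A real agent would invoke an LLM here; we just record the work done.
        return f"{self.role} completed: {task}"

class Orchestrator:
    """The 'conductor': decomposes a goal into sub-tasks and coordinates agents."""
    def __init__(self, agents):
        self.agents = {a.role: a for a in agents}

    def achieve(self, goal, plan):
        # plan maps each sub-task to the agent role best suited to perform it
        return [self.agents[role].run(task) for role, task in plan]

agents = [Agent("researcher"), Agent("drafter"), Agent("reviewer")]
conductor = Orchestrator(agents)
results = conductor.achieve(
    "produce a client briefing",
    plan=[("researcher", "gather sources"),
          ("drafter", "write first draft"),
          ("reviewer", "check accuracy")],
)
```

The point of the sketch is the division of labour: each agent is narrow and replaceable, while the orchestrator holds the overall goal and decides which agent does what, and in what order.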

Accessible, reliable and outcome-focused agentic AI is set to revolutionise how businesses operate, including by fundamentally transforming the workforce.  Individuals and organisations will move from instruction-based, deterministic computing to intent-based computing, whereby an individual gives an AI agent a desired outcome and the agent works out the best way to achieve it, with only limited further human input required.

A 2025 Accenture study predicts that, by 2030, AI agents (the building blocks of agentic AI) will be the primary users of most enterprises’ internal digital systems[2] and the World Economic Forum anticipates that CEOs will soon be required to manage hybrid workforces of humans and intelligent AI agents[3].

For example, agentic AI IT support systems represent a fundamental transformation in how IT support is provided, moving from a reactive, human-led support model to automated, proactive systems, with AI agents deployed not only to react and respond to human-initiated support requests but also to proactively detect anomalies, forecast incidents and resolve issues with limited or no human involvement.  Consider a server that begins to show deviations from expected processing speeds: rather than wait until a service request is made or there is a serious degradation of performance or downtime, the agentic system can identify the issue (including through ongoing monitoring and pattern analysis), diagnose the problem and then take independent corrective action.  This enables the early detection and resolution of issues and allows the IT team to focus on strategic goals and system development. 
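The detect-diagnose-act loop described above can be sketched as follows. The thresholds, latency figures and remediation actions are illustrative assumptions chosen for the example, not values from any particular product; the key idea is that routine fixes are applied autonomously while severe cases are escalated to a human.

```python
# Hypothetical sketch of a proactive agentic IT-support loop: detect a
# deviation from expected server performance, diagnose its severity, and
# take corrective action without waiting for a human support request.
# Thresholds and actions below are illustrative assumptions.

EXPECTED_MS = 120          # expected average response time (illustrative)
TOLERANCE = 0.25           # a 25% deviation triggers investigation

def detect_anomaly(samples_ms):
    """Flag a server whose average latency drifts beyond tolerance."""
    avg = sum(samples_ms) / len(samples_ms)
    return avg > EXPECTED_MS * (1 + TOLERANCE), avg

def remediate(avg_ms):
    """Choose a corrective action; escalate only severe cases to humans."""
    if avg_ms > EXPECTED_MS * 2:
        return "escalate-to-human"
    return "restart-service"   # routine fix applied autonomously

# Ongoing monitoring supplies latency samples; here, a mild degradation.
flagged, avg = detect_anomaly([150, 160, 155, 165])
action = remediate(avg) if flagged else "no-action"
```

In this run the average latency (157.5ms) exceeds the 150ms tolerance threshold but not the escalation threshold, so the system restarts the service itself. The design choice worth noting is the explicit escalation boundary: it is one concrete form of the "clearly defined parameters regarding what 'decisions' the system can make" discussed under risk mitigation below.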

Legal Considerations & Risks

Agentic AI deployments have the potential to deliver enormous benefits to businesses, including in terms of increased efficiency, productivity and operational capabilities.  However, as with humans, agentic AI systems can make mistakes; given their semi-autonomous nature, the pace at which they complete tasks and the limited requirement for active human involvement, those risks are compounded and magnified. 

Key legal risks to be considered in respect of agentic AI deployments include:

  • Compliance with Laws: The regulatory landscape is in a state of flux as governments around the world tread the line between promoting AI and protecting the public against potential risks.  Regarding agentic AI deployments in particular, on 8 January this year the UK Information Commissioner’s Office published a report on the data protection implications of agentic AI, emphasising that organisations remain responsible for the data protection compliance of the agentic AI that they develop, deploy or integrate into their systems and processes[4].

In addition to regulatory fines and other sanctions, organisations may be subject to civil claims for breach of relevant laws relating to the use of AI systems, including, for example, product liability claims, IP infringement, defamation and discriminatory behaviour.

  • Contractual Liability: Contractual liability may arise in connection with the deployment of agentic AI, including:
      • Execution of Contracts: AI agents may enter into contracts on behalf of organisations; in certain cases, the agent will be specifically deployed for such purposes (e.g. automated trading systems).  However, due to the increasingly autonomous nature of such systems, there is a risk that the system makes an unauthorised or incorrect transaction for which the deployer is liable.
      • Contractual Restrictions and Limits: Care will need to be taken not to exceed or otherwise breach any usage restrictions in contracts with customers or suppliers, including because agent ‘users’ are likely to process considerably higher volumes of transactions than human users.
  • Tortious Liability: An organisation may be exposed to tortious liability, including: (i) negligence (e.g. in respect of misleading information provided by a chatbot); (ii) nuisance (e.g. an out-of-control self-driving vehicle causing damage to property); and (iii) breach of statutory duty – see comment above.
  • Data Security Risks: In addition to regulatory compliance obligations and breaches, data security risks in AI systems include data leakage, data poisoning, model inversion, system manipulation and adversarial attacks (such as prompt injection), which can result in loss or corruption of information and data, unauthorised fund transfers, compromised model integrity, model theft, intellectual property infringements and operational disruption.
  • Intellectual Property Rights Disputes: AI-generated content may infringe intellectual property rights, including copyright materials and trademarks, leading to potential legal disputes.  The widely reported case of Getty Images v Stability AI[5] (which is subject to appeal) is a notable recent example of the risks and challenges facing developers and rights holders. Additionally, ownership of AI-created works remains a grey area.
  • Director Duties: Under the UK Companies Act 2006, directors are required to exercise reasonable care, skill, and diligence and face potential liability for failures in governance or supervision of AI systems.  This is complicated by the increasingly autonomous and “black box” nature of AI systems.

Mitigating Agentic AI Risks

In light of the inherent risks in AI and the changing regulatory landscape, a holistic, top-down approach is required to ensure responsible and informed use of AI agents (and AI more broadly) by organisations, including implementing clear internal governance controls, ensuring human oversight, building guardrails into AI systems by design (including clearly defined parameters regarding what ‘decisions’ the system can make) and having clear redress procedures where issues or complaints are raised.

In addition, just as specific cyber insurance policies are now ubiquitous as a consequence of increased data security risks, organisations may seek insurance protection for specific AI risks that are not covered by existing traditional policies.

For a more detailed summary, please see our full client briefing.

For more information, please contact Sam Tibbetts or Simon Jones.


[1] Wall Street Journal, “Meta Buys AI Startup Manus for More Than $2 Billion”

[2] https://www.accenture.com/content/dam/accenture/final/accenture-com/document-3/Accenture-Tech-Vision-2025.pdf

[3] https://www.weforum.org/stories/2025/06/cognitive-enterprise-agentic-business-revolution/

[4] https://ico.org.uk/about-the-ico/research-reports-impact-and-evaluation/research-and-reports/technology-and-innovation/tech-horizons-and-ico-tech-futures/ico-tech-futures-agentic-ai/

[5] Getty Images (US) Inc (and others) v Stability AI Limited, Squire Patton Boggs Insights