Experts believe AI capable of functioning without human assistance is on the way, and soon.
With China's development of the AI tool DeepSeek for a fraction of what it cost US AI companies, AI is becoming an increasingly common tool across the technology industry.
The commercialisation of AI brings dangers, such as hackers using AI-generated content, deepfake videos or audio to trick unsuspecting people out of money, data, or private commercial and personal information.
Robert Fry, a business analyst who specialises in using new technologies, said that 2025 is the “year of autonomous agents”.
Autonomous agents are AI computer programmes that can act on their own with little-to-no human assistance to achieve an objective.

Fry, who has a background in robotic process automation, said: “When those are more mainstream, let’s say in five or 10 years’ time, that’s when autonomous agents would be able to do what humans are doing.”
However, whilst AI offers benefits in terms of cost and time efficiencies, its lack of regulation has raised concerns over how to implement it.
Kat Kuzmin, an analyst who specialises in protecting and securing company data and IT systems, said: “If you let AI truly be autonomous, how do we stop that from outsmarting us?”
Kuzmin added that there isn’t much in place in terms of having an “AI security best practice” within corporations to deal with potential dangers.
When asked about the conversation surrounding AI’s capabilities, Kuzmin said: “There is a nervousness around developing AI and developing its security, but there is also this real motivation to get AI going for business value and for business reputation.”
Kuzmin added that the pace of growth in AI autonomy will potentially “outpace the capabilities of humans”.
However, laws such as the EU AI Act, which came into force last year, have provided more regulation and guidance on how businesses can use AI safely and securely.
On securing AI for GDPR compliance, Kuzmin said: “For anything you develop that’s new with technology, you have to be really balanced and you have to really understand the risk of what you’re willing to take, and I think nine times out of ten with AI you’re using personal data and that’s where it becomes really significant.”