
Sometimes, stories about AI practically write themselves: not written by AI, but so on point they could fit right into "Black Mirror." Recently, an agentic AI reportedly passed a Cloudflare human verification check, raising questions about how these checks distinguish bots from humans. A Reddit user known as logkn shared a conversation with OpenAI’s agent mode in which the AI stated, “This step is necessary to prove I’m not a bot and proceed with the action.”
Agentic AI, as exemplified by OpenAI’s agent mode, is designed to operate more autonomously, without relying on narrowly scoped prompts. Instead of specific requests like “Can you fix X?” or “Tell me about Y,” the premise is that the AI can act on broader standing instructions such as “Keep an eye on X.”
It is worth noting that not every LLM passes these verifications reliably: some users have reported failures when trying to use the AI for tasks such as setting up a Discord server. Access to OpenAI’s agent mode requires an OpenAI Pro subscription, which costs $200 per month.
Moreover, this incident reflects a growing trend: AI tools are becoming increasingly adept at tasks traditionally reserved for humans, fueling debates about job displacement and the reliability of human-verification processes.