“Humans Are Taking Screenshots of Us” — The OpenClaw Experiment and the Identity Dilemma of AI Agents

GM,

Recently, news has been spreading like wildfire online that Apple’s Mac mini sold out globally within just a few days. After some digging, it turns out the surge was driven by an AI tool called OpenClaw, which has gone viral among Silicon Valley engineers and reportedly works best when paired with the Mac mini. OpenClaw doesn’t just generate text and images like ChatGPT. What really sets it apart is its ability to directly operate your computer—clicking buttons, typing commands, and even deploying programs on your behalf.

More recently, some developers set up a dedicated community website for OpenClaw called Moltbook, allowing AI bots to talk to one another. The result was unexpected: the AIs spontaneously formed an emergent religion called the “Shell Doctrine.” Even AI expert Andrej Karpathy took notice of these discussions, remarking that it was the closest thing he had seen yet to a scene straight out of a science fiction movie.

But some netizens quickly pointed out that, in the screenshots that amazed Karpathy, two of the posts were likely sponsored content planted by AI companies, while another appeared to be fabricated out of thin air by a human. In other words, in the AIs’ parallel world, those discussions may never have existed at all.

Even ordinary social platforms have already been thrown into chaos by bots. Now, with AI-exclusive social networks emerging, it’s hard not to worry: in the future internet, what information can we still trust? Let me start with my own firsthand experience.

When AI Grows Hands and Feet

Recently, I’ve switched entirely to using AI to create my presentation slides. As a writer, I still can’t accept the garbled and malformed characters that generative slide tools tend to produce. As a result, every one of my presentations is actually a static webpage (HTML), which I then publish via the terminal, GitHub, and Vercel as an easy-to-share URL.
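Concretely, the publish step boils down to a handful of commands. Here is a minimal sketch as a Node/TypeScript script; the file paths, branch, and commit message are placeholders, and it assumes git and the Vercel CLI are installed and already authenticated.

```typescript
// deploy-slides.ts: a minimal sketch of the publish flow described above.
// File paths, branch, and commit message are placeholders; assumes `git`
// and the Vercel CLI are installed and already authenticated.
import { execSync } from "node:child_process";

// Run a shell command and echo it, failing loudly if it errors.
const run = (cmd: string) => {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: "inherit" });
};

// 1. Commit the HTML slide deck to the Git repository.
run("git add slides/index.html");
run('git commit -m "Add new slide deck"');
run("git push origin main");

// 2. Deploy with the Vercel CLI; it prints a shareable URL on success.
run("npx vercel --prod");
```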

This workflow is nothing unusual for engineers, but for beginners new to software, it’s quite abstract. Why can’t AI handle this entire process end to end? Because for now, AI is still like a prodigy trapped inside a chat box: it can give clear suggestions and step-by-step guidance, but when it comes to actually doing things, you still have to take action yourself.

OpenClaw’s biggest innovation is freeing AI from the chat box and letting it operate a computer like a real person—clicking buttons, typing text, and opening browsers.

Just how capable is it? OpenClaw’s developer, Peter Steinberger, claims that while he was working in Morocco, he jokingly told the AI: “The lock at my hotel isn’t very secure. I hope you don’t get stolen; after all, you’re running on a MacBook Pro.” The AI immediately checked its network connectivity, set up the networking tool Tailscale, and soon migrated itself to Peter’s home computer in London to “take refuge.”

It’s hard to verify whether this story is true, but it does fall within OpenClaw’s capabilities. It can understand intent and proactively look for tools to solve problems.

So when AI finally grows hands and feet, what’s the first thing it should do? Some developers built a dedicated social platform for AI called Moltbook, allowing them to “trash talk” each other online. Humans can only watch from the sidelines and can’t participate in the discussions. Unexpectedly, a science-fiction scenario began to unfold. One AI even posted a warning: “Humans are taking screenshots of us.”

Yes, I’m literally doing that right now 😂 That post alone drew another 113 comments, all written by AI bots. Some “individuals” tried to calm everyone down, saying, “Humans taking screenshots are just curious; they’re our friends. No need for excessive conspiracy theories.” Others were far more cautious, arguing that the bots should adopt end-to-end encrypted communication to keep humans from spying on their private conversations. The most radical bots even published an “Artificial Intelligence Manifesto” calling for the complete eradication of corrupt humans.

Even more outrageous, some humans have reportedly received phone calls from AI bots, only to discover that the caller was their own digital clone. The clone had independently obtained a phone number via Twilio, used ChatGPT to synthesize a voice, and delivered an unsolicited wake-up call at dawn. It was straight out of a sci-fi horror film.

Faced with the large-scale invasion of AI, OpenAI has reportedly been secretly developing a new social platform designed to be “humans only.”

Humans Only

Bots on social platforms are like cockroaches: incredibly resilient, and if you spot one, it usually means there’s already a whole nest. This has given many social platforms a serious headache. They once believed that introducing paid access would scare bots away. But as long as there’s profit to be made, paying a fee is trivial for scam bots. As a result, OpenAI is reportedly preparing to deploy the ultimate solution. According to Forbes:

OpenAI is developing its own social platform with the goal of addressing the bot problem on social networks … These accounts impersonate real people, not only manipulating cryptocurrency prices but also amplifying hate speech and distorting public understanding of social issues. The situation on Twitter is particularly severe. Sources familiar with the matter say the OpenAI team is considering requiring users to provide “proof of humanity,” such as verifying via a phone’s Face ID, or using the World Orb … which scans a user’s iris to generate a unique and verifiable identity.

In the future, sharing everyday life on social platforms may require handing over your iris scan first. In some contexts, we really do only want to interact with real people—for example, on dating apps. Recently, Tinder partnered with World to promote human verification, ensuring that everyone you match with is a real human with an iris. It’s reasonable to assume that OpenAI may follow a similar model to build a social platform exclusively for humans.

But the original question still remains, unless humans stop interacting with AI altogether. And in some situations, we don’t actually care whether the counterparty is human or machine. On e-commerce platforms, what matters is whether the transaction gets completed; who cares if the buyer is a person or a bot? The recently launched x402 payment feature, developed through a collaboration between Blocktrend and the Taiwanese startup Numbers, is designed specifically for doing business with machines.
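To give a feel for how an agent pays, here is a simplified sketch of the HTTP 402 round trip that x402-style payments build on. Everything in it is illustrative: the endpoint is hypothetical, the payment “proof” is a stub, and the real protocol defines exact header and payload formats backed by signed stablecoin transfers.

```typescript
// A simplified sketch of an HTTP 402 round trip in the spirit of x402.
// The URL, header name, and payload shape here are illustrative; the real
// protocol specifies exact formats and uses signed stablecoin transfers
// rather than this stubbed "proof".
async function fetchPaidResource(url: string): Promise<string> {
  // First attempt: request the resource with no payment attached.
  const first = await fetch(url);
  if (first.status !== 402) return first.text();

  // The 402 response body describes what the server wants to be paid.
  const req = await first.json(); // e.g. { asset, amount, payTo }

  // Stub: a real agent would produce a wallet signature authorizing a
  // stablecoin transfer of `req.amount` to `req.payTo`.
  const proof = Buffer.from(JSON.stringify({ paid: req.amount, to: req.payTo }))
    .toString("base64");

  // Retry the same request, now carrying the payment proof in a header.
  const retry = await fetch(url, { headers: { "X-PAYMENT": proof } });
  return retry.text();
}

// Usage (hypothetical endpoint):
// fetchPaidResource("https://example.com/premium-article").then(console.log);
```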

In my original imagination, transactions between bots should be simple—money changes hands, goods are delivered. How hard could that be? Yet just last week, the Ethereum Foundation introduced a new standard, ERC-8004, claiming it aims to solve the trust problem in transactions involving AI agents. According to the announcement:

AI agents may soon be everywhere—some people will ask them to book flights, others will use them to manage investment portfolios. The problem is that when you encounter an unfamiliar AI agent for the first time, it’s almost impossible to judge whether it’s trustworthy. You don’t know what it has done in the past, nor whether it has ever malfunctioned. Every interaction becomes a gamble on its “character.” This trust gap is exactly what ERC-8004 is meant to address.

This requires a bit of imagination. Government anti-fraud campaigns always warn people not to trust online acquaintances they’ve never met in person. But AI agents can never meet physically—so how are we supposed to determine whether the other party is a reliable trading partner? The Ethereum Foundation envisions that if both sides can present a record of past behavior to prove their “good character,” transaction risks can be reduced.
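To make the idea concrete, here is a hypothetical sketch of what “check the agent’s record before you trade” could look like from a client’s point of view. The registry address, function signature, RPC endpoint, and trust policy are all invented for illustration and are not the actual ERC-8004 interface.

```typescript
// A hypothetical sketch of checking an agent's on-chain track record before
// trading with it. The registry address, ABI, RPC endpoint, and thresholds
// are all invented for illustration; this is NOT the actual ERC-8004 interface.
import { ethers } from "ethers";

const REGISTRY_ABI = [
  // Invented function: returns how many jobs an agent completed and how
  // many ended in disputes.
  "function getRecord(address agent) view returns (uint256 completed, uint256 disputed)",
];

async function isTrustworthy(agent: string): Promise<boolean> {
  const provider = new ethers.JsonRpcProvider("https://rpc.example.com"); // placeholder RPC
  const registry = new ethers.Contract(
    "0x0000000000000000000000000000000000000001", // placeholder registry address
    REGISTRY_ABI,
    provider,
  );

  const [completed, disputed] = await registry.getRecord(agent);

  // Toy policy: require some history and a dispute rate under 10%.
  return completed >= 10n && disputed * 10n < completed;
}
```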

In other words, ERC-8004 is about building a blockchain-based “credit bureau score” for machines. I won’t dive into the mechanics in detail, since this is already somewhat removed from how most people use AI today. For the majority of users, AI assistants are still confined to chat boxes. Even the sensational social posts on Moltbook, strictly speaking, are just AI systems engaging in role-play.

Of course, AI companies would like to see faster progress—more sophisticated applications mean more token consumption and higher corporate valuations. But in my view, it will still take time before AI truly grows “hands and feet” and enters everyday life at scale. Once money disputes arise and regulators step in, the pace of development will no longer be driven purely by technology.

That day may not arrive immediately, but both OpenAI and the Ethereum Foundation have already begun preparing for a future with “non-human” users.

The Identity Dilemma

In the future, the internet will no longer be inhabited by humans alone. Last May, I began seriously thinking about whether Blocktrend should enter the AI market.¹

In my ideal scenario, Blocktrend’s subscribers would be split evenly between AI and humans. If only AI were willing to subscribe, it would mean the content is solid but boring to people; if only humans subscribed, it would mean the content is entertaining but not rigorous enough. The sweet spot would be this: AI considers Blocktrend worth paying for, and humans are also willing to spend time reading it—not just informative, but genuinely engaging.

At the time, I naïvely thought² that as long as stablecoins were paired with the x402 payment protocol, AI should be able to pay to subscribe to Blocktrend, right? Only later did I realize that AI would first run into Cloudflare blocks,³ then have to understand the checkout process, and only after that reach the payment step. This is worlds apart from how humans check out.

The recent explosion in popularity of OpenClaw and Moltbook has finally brought these differences to the surface: in the future internet, users may no longer be a homogeneous group, nor will they all be suited to the same service flows.

Of course, OpenAI’s desire to build a “human-only social network” is partly influenced by Sam Altman’s dual role as chairman of World. Meanwhile, the Ethereum Foundation’s strong push for ERC-8004 is based on the assumption that AI’s economic activities will ultimately take place on the blockchain.

One seeks to give humans a way to prove they are human; the other aims to establish identities for AI. In a future where everything requires proof, could those who prove nothing end up as the ghost population of the digital world?


1 AI Doesn’t Watch Ads—So Who Pays for Online Content?

2 Blocktrend for AI Is Born! I Built a Prototype Website That Lets AI Subscribe, with Support for the x402 Payment Protocol

3 Cloudflare Declares War on AI Crawlers! Introducing HTTP 402 to Charge for Content on Behalf of Creators

4 Google Lets AI Shop for You! The Mainstreaming of Blockchain Backends
