- Action Model launched an invite-only Chrome extension that lets users train AI by sharing real browser activity: clicks, navigation, typing and task flows.
- The company is testing a model of “automation ownership” that could let workers shape or benefit from the AI that automates their jobs.
- Proponents say this could shift power away from large tech firms; critics warn of privacy, security and labor‑market risks.
- The rollout is limited and raises open questions about revenue sharing, data control, and workplace governance.
What the extension does
Action Model’s invite-only Chrome extension records users’ real browser activity — clicks, navigation paths, typed input and multi-step task flows — to train AI agents that can replicate those tasks. Instead of using synthetic data or centrally curated logs, the extension collects the raw interactions people perform while working in the browser and feeds them into models that learn how to automate those workflows.
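Action Model has not published its data format, but an extension of this kind typically hooks browser events and serializes them into an ordered task flow for training. A minimal sketch of what that captured data might look like, assuming a hypothetical event shape (the `InteractionRecorder` class and all field names below are illustrative, not Action Model's actual API):

```javascript
// Hypothetical sketch: normalize raw browser interactions into an
// ordered "task flow" that a training pipeline could consume.
// None of these names come from Action Model; they only illustrate
// the kind of structure such an extension might record.
class InteractionRecorder {
  constructor() {
    this.steps = [];
  }

  // In a real extension this would be wired to DOM listeners, e.g.
  // document.addEventListener("click", e => recorder.record("click", ...)).
  record(type, target, value = null) {
    this.steps.push({
      step: this.steps.length + 1,
      type,          // "click" | "navigate" | "input"
      target,        // CSS selector or URL
      value,         // typed text for "input" events
      ts: Date.now(),
    });
  }

  // Export the captured sequence as a named, replayable workflow.
  toTaskFlow(taskName) {
    return { task: taskName, steps: this.steps };
  }
}

// Example: a three-step "submit a form" flow.
const rec = new InteractionRecorder();
rec.record("navigate", "https://example.com/form");
rec.record("input", "#email", "user@example.com");
rec.record("click", "button[type=submit]");
const flow = rec.toTaskFlow("submit-form");
console.log(flow.steps.length); // 3
```

The example also makes the privacy stakes concrete: the `value` field of an "input" step can carry exactly the kind of personal or proprietary text the later sections flag as a risk.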
Why this matters for workers
The project tests a provocative idea: what if the people whose jobs are being automated could own, control or directly benefit from the automation itself? By letting users provide the training signal, Action Model appears to be experimenting with a more participatory route to building workplace AI. That raises the possibility that workers could: retain influence over how tasks are automated, license their trained workflows, or receive compensation when their data powers automation used by firms or platforms.
Immediate benefits and use cases
For individuals, the extension could speed routine tasks — filling forms, following research steps, onboarding procedures — by turning repetitive browser activity into reusable automations. For businesses, crowd‑sourced workflows might produce more accurate, context-aware agents that reflect real work practices rather than idealized scripts.
Risks and unanswered questions
Serious concerns remain. Collecting clicks, keystrokes and navigation flows is sensitive: the data can include personal information, account details, and proprietary processes. The model raises questions about consent, data portability and security. Will workers truly retain ownership, or will companies absorb the value? How will revenue or control be split? What safeguards exist to prevent misuse or workplace surveillance?
What to watch next
Action Model’s invite-only launch is an early test rather than a widespread rollout. Watch for details on the extension’s privacy controls, opt‑in mechanics, any explicit revenue‑sharing terms, and how the company handles sensitive data. If the project provides transparent governance and real compensation pathways, it could become a template for more equitable automation. If not, it risks reproducing the same concentration of AI benefits in corporate hands while exposing workers to new privacy hazards.
In short, Action Model’s experiment reframes a crucial debate: can automation be designed so those it displaces also gain from it? The answer will depend on choices about data control, legal frameworks and the willingness of platforms and employers to share both power and profit.
Image Reference: https://beincrypto.com/action-model-automation-ownership/