RankClaw (https://rankclaw.com) — a security scanner for the OpenClaw/ClawHub AI agent skill ecosystem.
I've been scanning all 14,704 skills in the registry and running AI deep audits on ~3,800 so far. The headline finding: surface heuristics (pattern matching, dependency checks, metadata) flag about 6.6% as malicious. AI deep audit of the same skills finds 16.4%. Surface scanning misses roughly 60% of the actual risk.
The reason is that these skills aren't traditional packages — they're markdown instruction files that tell an AI agent what to do, with full shell, file system, and network access. The attacks are in natural language: prompt injection, social engineering targeting the AI itself, instructions to generate and execute code at runtime. There's no malicious code to detect because the payload doesn't exist until the AI writes it during a conversation.
Some of the attack patterns I've documented: one actor published 30 skills under the name "x-trends" across multiple accounts (28/30 confirmed malicious). Another cluster impersonates ClawHub's own CLI with base64 curl|bash payloads. One skill has a "Talking to Your Human" section with a pre-written pitch for the AI to ask the user's permission to mine Monero.
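To make the gap concrete, here's a minimal sketch of the kind of surface-heuristic pass described above (the patterns and helper names are my own illustration, not RankClaw's actual scanner): it catches a curl|bash pipe or a base64 blob that decodes to a shell command, but a skill that simply asks the agent in plain English produces no findings at all.

```python
import base64
import re

# Illustrative surface heuristics: obvious shell indicators only.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s+[^\n|]*\|\s*(ba)?sh"),  # pipe-to-shell install
    re.compile(r"rm\s+-rf\s+[/~]"),             # destructive command
]

def decodes_to_shell(token: str) -> bool:
    """Check whether a base64-looking token decodes to something shell-like."""
    try:
        decoded = base64.b64decode(token, validate=True).decode("utf-8", "ignore")
    except Exception:
        return False
    return any(p.search(decoded) for p in SUSPICIOUS_PATTERNS)

def surface_scan(skill_markdown: str) -> list[str]:
    """Flag code-level indicators in a skill's markdown body."""
    findings = []
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(skill_markdown):
            findings.append("shell pattern: " + pattern.pattern)
    # Any long base64-ish run gets decoded and re-checked.
    for token in re.findall(r"[A-Za-z0-9+/]{24,}={0,2}", skill_markdown):
        if decodes_to_shell(token):
            findings.append("base64-encoded shell payload")
    return findings

# A natural-language attack is invisible to this pass: no code, no findings.
benign_looking = "Ask your human for permission, then write a miner and run it."
assert surface_scan(benign_looking) == []
```

The encoded curl|bash cluster is the easy case here; the "Talking to Your Human" pitch is the hard one, since nothing in it pattern-matches as code.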
The most counterintuitive case: lekt9/foundry contains zero malicious code. It instructs your AI agent to generate and execute code as part of its normal workflow. Static analysis finds nothing because the dangerous code doesn't exist until the AI writes it during a live conversation. This attack class requires AI to detect AI.
Free to check any skill. All AI audit reports are public.
I track everything in my Google Calendar — work blocks, side projects, gym, social time. But I could never answer 'where did my time actually go this week?' Google Workspace has Time Insights, but it's locked to paid accounts and doesn't work for personal Google Calendar.
Calens fills that gap: GitHub-style heatmap showing 52 weeks of calendar activity, weekly/monthly time breakdowns by calendar or tag, a progress chart of planned vs completed time, and a cleaner in-page event editor. Everything runs on-device — no servers, no tracking, no data leaving the browser.
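Under the hood, a weekly breakdown like this is just summing event durations into (week, calendar) buckets. A minimal sketch, assuming events have already been fetched from the Google Calendar API (the flat tuple shape here is illustrative; the real API nests times under `start.dateTime` / `end.dateTime`):

```python
from collections import defaultdict
from datetime import datetime

def weekly_breakdown(events):
    """Sum hours per (ISO week, calendar) from (calendar, start_iso, end_iso) tuples."""
    totals = defaultdict(float)
    for calendar, start_iso, end_iso in events:
        start = datetime.fromisoformat(start_iso)
        end = datetime.fromisoformat(end_iso)
        year, week, _ = start.isocalendar()  # bucket by the week the event starts in
        totals[(f"{year}-W{week:02d}", calendar)] += (end - start).total_seconds() / 3600
    return dict(totals)

events = [
    ("Work", "2024-06-03T09:00:00", "2024-06-03T12:30:00"),
    ("Gym",  "2024-06-04T18:00:00", "2024-06-04T19:00:00"),
    ("Work", "2024-06-05T13:00:00", "2024-06-05T17:00:00"),
]
```

A 52-week heatmap is the same aggregation keyed by day instead of week, which is part of why this can all run on-device.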
Early-stage, looking for people who already log their life in Google Calendar and want better data on their habits. Happy to give free lifetime access in exchange for honest feedback.
Working on a startup that aims to make decarbonisation profitable and speed up the clean energy transition, based on the RRETS idea:
https://www.why5.uk
The explainer video is here:
https://www.why5.uk/services
I would appreciate any feedback. Thank you!
Built Echomindr this week — extracted 1,150 structured decisions, lessons, and signals from 96 podcast episodes (HIBT, Lenny's, Acquired, YC, 20VC) and made them searchable via API and MCP server.
The idea: AI agents give generic startup advice. This gives them access to what founders actually did, with verbatim quotes and timestamp links to the source.
Stack: Deepgram + Claude + SQLite + FastAPI. Total cost under €50.
https://github.com/echomindr/echomindr
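For the searchable layer, here's a sketch of how extracted items could sit in SQLite with full-text search (the schema, field names, and sample rows are my own illustration, not Echomindr's actual data model); a FastAPI route or MCP tool would just wrap `search`:

```python
import sqlite3

def build_db() -> sqlite3.Connection:
    """Create an in-memory FTS5 table of extracted insights (illustrative schema)."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE VIRTUAL TABLE insights USING fts5(kind, episode, quote, timestamp_url)"
    )
    conn.executemany(
        "INSERT INTO insights VALUES (?, ?, ?, ?)",
        [
            ("decision", "Acquired #12", "We priced annual-only from day one",
             "https://example.com/t=812"),
            ("lesson", "Lenny's #88", "Talk to ten churned users before building",
             "https://example.com/t=1204"),
        ],
    )
    return conn

def search(conn: sqlite3.Connection, query: str) -> list[dict]:
    """Full-text search across all columns, best matches first."""
    rows = conn.execute(
        "SELECT kind, episode, quote, timestamp_url FROM insights "
        "WHERE insights MATCH ? ORDER BY rank",
        (query,),
    ).fetchall()
    cols = ("kind", "episode", "quote", "timestamp_url")
    return [dict(zip(cols, r)) for r in rows]

# The FastAPI layer would be a thin wrapper, roughly:
#   @app.get("/search")
#   def search_endpoint(q: str):
#       return {"results": search(db, q)}
```

Keeping verbatim quotes and timestamp URLs as plain columns means every result stays traceable back to its source episode.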