Initial commit - LettaBot multi-channel AI assistant

Co-authored-by: Cameron Pfiffer <cameron@pfiffer.org>
Co-authored-by: Caren Thomas <carenthomas@gmail.com>
Co-authored-by: Charles Packer <packercharles@gmail.com>
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
Commit 22770e6e88 by Sarah Wooders, 2026-01-28 18:02:51 -08:00
133 changed files with 21947 additions and 0 deletions

---
name: 1password
description: Set up and use 1Password CLI (op). Use when installing the CLI, enabling desktop app integration, signing in (single or multi-account), or reading/injecting/running secrets via op.
homepage: https://developer.1password.com/docs/cli/get-started/
metadata: {"clawdbot":{"emoji":"🔐","requires":{"bins":["op"]},"install":[{"id":"brew","kind":"brew","formula":"1password-cli","bins":["op"],"label":"Install 1Password CLI (brew)"}]}}
---
# 1Password CLI
Follow the official CLI get-started steps. Don't guess install commands.
## References
- `references/get-started.md` (install + app integration + sign-in flow)
- `references/cli-examples.md` (real `op` examples)
## Workflow
1. Check OS + shell.
2. Verify CLI present: `op --version`.
3. Confirm desktop app integration is enabled (per get-started) and the app is unlocked.
4. REQUIRED: create a fresh tmux session for all `op` commands (no direct `op` calls outside tmux).
5. Sign in / authorize inside tmux: `op signin` (expect app prompt).
6. Verify access inside tmux: `op whoami` (must succeed before any secret read).
7. If multiple accounts: use `--account` or `OP_ACCOUNT`.
## REQUIRED tmux session (T-Max)
The shell tool uses a fresh TTY per command. To avoid re-prompts and failures, always run `op` inside a dedicated tmux session with a fresh socket/session name.
Example (see `tmux` skill for socket conventions, do not reuse old session names):
```bash
SOCKET_DIR="${CLAWDBOT_TMUX_SOCKET_DIR:-${TMPDIR:-/tmp}/clawdbot-tmux-sockets}"
mkdir -p "$SOCKET_DIR"
SOCKET="$SOCKET_DIR/clawdbot-op.sock"
SESSION="op-auth-$(date +%Y%m%d-%H%M%S)"
tmux -S "$SOCKET" new -d -s "$SESSION" -n shell
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op signin --account my.1password.com" Enter
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op whoami" Enter
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op vault list" Enter
tmux -S "$SOCKET" capture-pane -p -J -t "$SESSION":0.0 -S -200
tmux -S "$SOCKET" kill-session -t "$SESSION"
```
## Guardrails
- Never paste secrets into logs, chat, or code.
- Prefer `op run` / `op inject` over writing secrets to disk.
- If sign-in without app integration is needed, use `op account add`.
- If a command returns "account is not signed in", re-run `op signin` inside tmux and authorize in the app.
- Do not run `op` outside tmux; stop and ask if tmux is unavailable.

# op CLI examples (from op help)
## Sign in
- `op signin`
- `op signin --account <shorthand|signin-address|account-id|user-id>`
## Read
- `op read op://app-prod/db/password`
- `op read "op://app-prod/db/one-time password?attribute=otp"`
- `op read "op://app-prod/ssh key/private key?ssh-format=openssh"`
- `op read --out-file ./key.pem op://app-prod/server/ssh/key.pem`
## Run
- `export DB_PASSWORD="op://app-prod/db/password"`
- `op run --no-masking -- printenv DB_PASSWORD`
- `op run --env-file="./.env" -- printenv DB_PASSWORD`
## Inject
- `echo "db_password: {{ op://app-prod/db/password }}" | op inject`
- `op inject -i config.yml.tpl -o config.yml`
## Whoami / accounts
- `op whoami`
- `op account list`

# 1Password CLI get-started (summary)
- Works on macOS, Windows, and Linux.
- macOS/Linux shells: bash, zsh, sh, fish.
- Windows shell: PowerShell.
- Requires a 1Password subscription and the desktop app to use app integration.
- macOS requirement: Big Sur 11.0.0 or later.
- Linux app integration requires PolKit + an auth agent.
- Install the CLI per the official doc for your OS.
- Enable desktop app integration in the 1Password app:
- Open and unlock the app, then select your account/collection.
- macOS: Settings > Developer > Integrate with 1Password CLI (Touch ID optional).
- Windows: turn on Windows Hello, then Settings > Developer > Integrate.
- Linux: Settings > Security > Unlock using system authentication, then Settings > Developer > Integrate.
- After integration, run any command to sign in (example in docs: `op vault list`).
- If multiple accounts: use `op signin` to pick one, or `--account` / `OP_ACCOUNT`.
- For non-integration auth, use `op account add`.

---
name: apple-notes
description: Manage Apple Notes via the `memo` CLI on macOS (create, view, edit, delete, search, move, and export notes). Use when a user asks Clawdbot to add a note, list notes, search notes, or manage note folders.
homepage: https://github.com/antoniorodr/memo
metadata: {"clawdbot":{"emoji":"📝","os":["darwin"],"requires":{"bins":["memo"]},"install":[{"id":"brew","kind":"brew","formula":"antoniorodr/memo/memo","bins":["memo"],"label":"Install memo via Homebrew"}]}}
---
# Apple Notes CLI
Use `memo notes` to manage Apple Notes directly from the terminal. Create, view, edit, delete, search, move notes between folders, and export to HTML/Markdown.
Setup
- Install (Homebrew): `brew tap antoniorodr/memo && brew install antoniorodr/memo/memo`
- Manual (pip): `pip install .` (after cloning the repo)
- macOS-only; if prompted, grant Automation access to Notes.app.
View Notes
- List all notes: `memo notes`
- Filter by folder: `memo notes -f "Folder Name"`
- Search notes (fuzzy): `memo notes -s "query"`
Create Notes
- Add a new note: `memo notes -a`
- Opens an interactive editor to compose the note.
- Quick add with title: `memo notes -a "Note Title"`
Edit Notes
- Edit existing note: `memo notes -e`
- Interactive selection of note to edit.
Delete Notes
- Delete a note: `memo notes -d`
- Interactive selection of note to delete.
Move Notes
- Move note to folder: `memo notes -m`
- Interactive selection of note and destination folder.
Export Notes
- Export to HTML/Markdown: `memo notes -ex`
- Exports selected note; uses Mistune for markdown processing.
Limitations
- Cannot edit notes containing images or attachments.
- Interactive prompts may require terminal access.
Notes
- macOS-only.
- Requires Apple Notes.app to be accessible.
- For automation, grant permissions in System Settings > Privacy & Security > Automation.

---
name: apple-reminders
description: Manage Apple Reminders via the `remindctl` CLI on macOS (list, add, edit, complete, delete). Supports lists, date filters, and JSON/plain output.
homepage: https://github.com/steipete/remindctl
metadata: {"clawdbot":{"emoji":"⏰","os":["darwin"],"requires":{"bins":["remindctl"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/remindctl","bins":["remindctl"],"label":"Install remindctl via Homebrew"}]}}
---
# Apple Reminders CLI (remindctl)
Use `remindctl` to manage Apple Reminders directly from the terminal. It supports list filtering, date-based views, and scripting output.
Setup
- Install (Homebrew): `brew install steipete/tap/remindctl`
- From source: `pnpm install && pnpm build` (binary at `./bin/remindctl`)
- macOS-only; grant Reminders permission when prompted.
Permissions
- Check status: `remindctl status`
- Request access: `remindctl authorize`
View Reminders
- Default (today): `remindctl`
- Today: `remindctl today`
- Tomorrow: `remindctl tomorrow`
- Week: `remindctl week`
- Overdue: `remindctl overdue`
- Upcoming: `remindctl upcoming`
- Completed: `remindctl completed`
- All: `remindctl all`
- Specific date: `remindctl 2026-01-04`
Manage Lists
- List all lists: `remindctl list`
- Show list: `remindctl list Work`
- Create list: `remindctl list Projects --create`
- Rename list: `remindctl list Work --rename Office`
- Delete list: `remindctl list Work --delete`
Create Reminders
- Quick add: `remindctl add "Buy milk"`
- With list + due: `remindctl add --title "Call mom" --list Personal --due tomorrow`
Edit Reminders
- Edit title/due: `remindctl edit 1 --title "New title" --due 2026-01-04`
Complete Reminders
- Complete by id: `remindctl complete 1 2 3`
Delete Reminders
- Delete by id: `remindctl delete 4A83 --force`
Output Formats
- JSON (scripting): `remindctl today --json`
- Plain TSV: `remindctl today --plain`
- Counts only: `remindctl today --quiet`
Date Formats
Accepted by `--due` and date filters:
- `today`, `tomorrow`, `yesterday`
- `YYYY-MM-DD`
- `YYYY-MM-DD HH:mm`
- ISO 8601 (`2026-01-04T12:34:56Z`)
Notes
- macOS-only.
- If access is denied, enable Terminal/remindctl in System Settings → Privacy & Security → Reminders.
- If running over SSH, grant access on the Mac that runs the command.

---
name: bear-notes
description: Create, search, and manage Bear notes via grizzly CLI.
homepage: https://bear.app
metadata: {"clawdbot":{"emoji":"🐻","os":["darwin"],"requires":{"bins":["grizzly"]},"install":[{"id":"go","kind":"go","module":"github.com/tylerwince/grizzly/cmd/grizzly@latest","bins":["grizzly"],"label":"Install grizzly (go)"}]}}
---
# Bear Notes
Use `grizzly` to create, read, and manage notes in Bear on macOS.
Requirements
- Bear app installed and running
- For some operations (add-text, tags, open-note --selected), a Bear app token (stored in `~/.config/grizzly/token`)
## Getting a Bear Token
For operations that require a token (add-text, tags, open-note --selected), you need an authentication token:
1. Open Bear → Help → API Token → Copy Token
2. Save it: `echo "YOUR_TOKEN" > ~/.config/grizzly/token`
## Common Commands
Create a note
```bash
echo "Note content here" | grizzly create --title "My Note" --tag work
grizzly create --title "Quick Note" --tag inbox < /dev/null
```
Open/read a note by ID
```bash
grizzly open-note --id "NOTE_ID" --enable-callback --json
```
Append text to a note
```bash
echo "Additional content" | grizzly add-text --id "NOTE_ID" --mode append --token-file ~/.config/grizzly/token
```
List all tags
```bash
grizzly tags --enable-callback --json --token-file ~/.config/grizzly/token
```
Search notes (via open-tag)
```bash
grizzly open-tag --name "work" --enable-callback --json
```
## Options
Common flags:
- `--dry-run` — Preview the URL without executing
- `--print-url` — Show the x-callback-url
- `--enable-callback` — Wait for Bear's response (needed for reading data)
- `--json` — Output as JSON (when using callbacks)
- `--token-file PATH` — Path to Bear API token file
## Configuration
Grizzly reads config from (in priority order):
1. CLI flags
2. Environment variables (`GRIZZLY_TOKEN_FILE`, `GRIZZLY_CALLBACK_URL`, `GRIZZLY_TIMEOUT`)
3. `.grizzly.toml` in current directory
4. `~/.config/grizzly/config.toml`
Example `~/.config/grizzly/config.toml`:
```toml
token_file = "~/.config/grizzly/token"
callback_url = "http://127.0.0.1:42123/success"
timeout = "5s"
```
## Notes
- Bear must be running for commands to work
- Note IDs are Bear's internal identifiers (visible in note info or via callbacks)
- Use `--enable-callback` when you need to read data back from Bear
- Some operations require a valid token (add-text, tags, open-note --selected)

.skills/bird/SKILL.md
---
name: bird
description: X/Twitter CLI for reading, searching, posting, and engagement via cookies.
homepage: https://bird.fast
metadata: {"clawdbot":{"emoji":"🐦","requires":{"bins":["bird"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/bird","bins":["bird"],"label":"Install bird (brew)","os":["darwin"]},{"id":"npm","kind":"node","package":"@steipete/bird","bins":["bird"],"label":"Install bird (npm)"}]}}
---
# bird 🐦
Fast X/Twitter CLI using GraphQL + cookie auth.
## Install
```bash
# npm/pnpm/bun
npm install -g @steipete/bird
# Homebrew (macOS, prebuilt binary)
brew install steipete/tap/bird
# One-shot (no install)
bunx @steipete/bird whoami
```
## Authentication
`bird` uses cookie-based auth.
Use `--auth-token` / `--ct0` to pass cookies directly, or `--cookie-source` for browser cookies.
Run `bird check` to see which source is active. For Arc/Brave, use `--chrome-profile-dir <path>`.
## Commands
### Account & Auth
```bash
bird whoami # Show logged-in account
bird check # Show credential sources
bird query-ids --fresh # Refresh GraphQL query ID cache
```
### Reading Tweets
```bash
bird read <url-or-id> # Read a single tweet
bird <url-or-id> # Shorthand for read
bird thread <url-or-id> # Full conversation thread
bird replies <url-or-id> # List replies to a tweet
```
### Timelines
```bash
bird home # Home timeline (For You)
bird home --following # Following timeline
bird user-tweets @handle -n 20 # User's profile timeline
bird mentions # Tweets mentioning you
bird mentions --user @handle # Mentions of another user
```
### Search
```bash
bird search "query" -n 10
bird search "from:steipete" --all --max-pages 3
```
### News & Trending
```bash
bird news -n 10 # AI-curated from Explore tabs
bird news --ai-only # Filter to AI-curated only
bird news --sports # Sports tab
bird news --with-tweets # Include related tweets
bird trending # Alias for news
```
### Lists
```bash
bird lists # Your lists
bird lists --member-of # Lists you're a member of
bird list-timeline <id> -n 20 # Tweets from a list
```
### Bookmarks & Likes
```bash
bird bookmarks -n 10
bird bookmarks --folder-id <id> # Specific folder
bird bookmarks --include-parent # Include parent tweet
bird bookmarks --author-chain # Author's self-reply chain
bird bookmarks --full-chain-only # Full reply chain
bird unbookmark <url-or-id>
bird likes -n 10
```
### Social Graph
```bash
bird following -n 20 # Users you follow
bird followers -n 20 # Users following you
bird following --user <id> # Another user's following
bird about @handle # Account origin/location info
```
### Engagement Actions
```bash
bird follow @handle # Follow a user
bird unfollow @handle # Unfollow a user
```
### Posting
```bash
bird tweet "hello world"
bird reply <url-or-id> "nice thread!"
bird tweet "check this out" --media image.png --alt "description"
```
**⚠️ Posting risks**: Posting is more likely to be rate limited; if blocked, use the browser tool instead.
## Media Uploads
```bash
bird tweet "hi" --media img.png --alt "description"
bird tweet "pics" --media a.jpg --media b.jpg # Up to 4 images
bird tweet "video" --media clip.mp4 # Or 1 video
```
## Pagination
Commands supporting pagination: `replies`, `thread`, `search`, `bookmarks`, `likes`, `list-timeline`, `following`, `followers`, `user-tweets`
```bash
bird bookmarks --all # Fetch all pages
bird bookmarks --max-pages 3 # Limit pages
bird bookmarks --cursor <cursor> # Resume from cursor
bird replies <id> --all --delay 1000 # Delay between pages (ms)
```
## Output Options
```bash
--json # JSON output
--json-full # JSON with raw API response
--plain # No emoji, no color (script-friendly)
--no-emoji # Disable emoji
--no-color # Disable ANSI colors (or set NO_COLOR=1)
--quote-depth n # Max quoted tweet depth in JSON (default: 1)
```
## Global Options
```bash
--auth-token <token> # Set auth_token cookie
--ct0 <token> # Set ct0 cookie
--cookie-source <source> # Cookie source for browser cookies (repeatable)
--chrome-profile <name> # Chrome profile name
--chrome-profile-dir <path> # Chrome/Chromium profile dir or cookie DB path
--firefox-profile <name> # Firefox profile
--timeout <ms> # Request timeout
--cookie-timeout <ms> # Cookie extraction timeout
```
## Config File
`~/.config/bird/config.json5` (global) or `./.birdrc.json5` (project):
```json5
{
cookieSource: ["chrome"],
chromeProfileDir: "/path/to/Arc/Profile",
timeoutMs: 20000,
quoteDepth: 1
}
```
Environment variables: `BIRD_TIMEOUT_MS`, `BIRD_COOKIE_TIMEOUT_MS`, `BIRD_QUOTE_DEPTH`
## Troubleshooting
### Query IDs stale (404 errors)
```bash
bird query-ids --fresh
```
### Cookie extraction fails
- Check browser is logged into X
- Try different `--cookie-source`
- For Arc/Brave: use `--chrome-profile-dir`
---
**TL;DR**: Read/search/engage with CLI. Post carefully or use browser. 🐦

---
name: blogwatcher
description: Monitor blogs and RSS/Atom feeds for updates using the blogwatcher CLI.
homepage: https://github.com/Hyaxia/blogwatcher
metadata: {"clawdbot":{"emoji":"📰","requires":{"bins":["blogwatcher"]},"install":[{"id":"go","kind":"go","module":"github.com/Hyaxia/blogwatcher/cmd/blogwatcher@latest","bins":["blogwatcher"],"label":"Install blogwatcher (go)"}]}}
---
# blogwatcher
Track blog and RSS/Atom feed updates with the `blogwatcher` CLI.
Install
- Go: `go install github.com/Hyaxia/blogwatcher/cmd/blogwatcher@latest`
Quick start
- `blogwatcher --help`
Common commands
- Add a blog: `blogwatcher add "My Blog" https://example.com`
- List blogs: `blogwatcher blogs`
- Scan for updates: `blogwatcher scan`
- List articles: `blogwatcher articles`
- Mark an article read: `blogwatcher read 1`
- Mark all articles read: `blogwatcher read-all`
- Remove a blog: `blogwatcher remove "My Blog"`
Example output
```
$ blogwatcher blogs
Tracked blogs (1):
xkcd
URL: https://xkcd.com
```
```
$ blogwatcher scan
Scanning 1 blog(s)...
xkcd
Source: RSS | Found: 4 | New: 4
Found 4 new article(s) total!
```
Notes
- Use `blogwatcher <command> --help` to discover flags and options.

.skills/blucli/SKILL.md
---
name: blucli
description: BluOS CLI (blu) for discovery, playback, grouping, and volume.
homepage: https://blucli.sh
metadata: {"clawdbot":{"emoji":"🫐","requires":{"bins":["blu"]},"install":[{"id":"go","kind":"go","module":"github.com/steipete/blucli/cmd/blu@latest","bins":["blu"],"label":"Install blucli (go)"}]}}
---
# blucli (blu)
Use `blu` to control Bluesound/NAD players.
Quick start
- `blu devices` (pick target)
- `blu --device <id> status`
- `blu play|pause|stop`
- `blu volume set 15`
Target selection (in priority order)
- `--device <id|name|alias>`
- `BLU_DEVICE`
- config default (if set)
Common tasks
- Grouping: `blu group status|add|remove`
- TuneIn search/play: `blu tunein search "query"`, `blu tunein play "query"`
Prefer `--json` for scripts. Confirm the target device before changing playback.

.skills/camsnap/SKILL.md
---
name: camsnap
description: Capture frames or clips from RTSP/ONVIF cameras.
homepage: https://camsnap.ai
metadata: {"clawdbot":{"emoji":"📸","requires":{"bins":["camsnap"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/camsnap","bins":["camsnap"],"label":"Install camsnap (brew)"}]}}
---
# camsnap
Use `camsnap` to grab snapshots, clips, or motion events from configured cameras.
Setup
- Config file: `~/.config/camsnap/config.yaml`
- Add camera: `camsnap add --name kitchen --host 192.168.0.10 --user user --pass pass`
Common commands
- Discover: `camsnap discover --info`
- Snapshot: `camsnap snap kitchen --out shot.jpg`
- Clip: `camsnap clip kitchen --dur 5s --out clip.mp4`
- Motion watch: `camsnap watch kitchen --threshold 0.2 --action '...'`
- Doctor: `camsnap doctor --probe`
Notes
- Requires `ffmpeg` on PATH.
- Prefer a short test capture before longer clips.

.skills/cron/SKILL.md
---
name: cron
description: Create and manage scheduled tasks (cron jobs) that send you messages at specified times.
---
# Cron Jobs
Schedule tasks that send you messages at specified times. Jobs are scheduled immediately when created.
## Quick Reference
```bash
lettabot-cron list # List all jobs
lettabot-cron create [options] # Create job
lettabot-cron delete ID # Delete job
lettabot-cron enable ID # Enable job
lettabot-cron disable ID # Disable job
```
## Create a Job
```bash
lettabot-cron create \
--name "Morning Briefing" \
--schedule "0 8 * * *" \
--message "Good morning! Review tasks for today." \
--deliver telegram:123456789
```
**Options:**
- `-n, --name` - Job name (required)
- `-s, --schedule` - Cron expression (required)
- `-m, --message` - Message sent to you when job runs (required)
- `-d, --deliver` - Where to send response (format: `channel:chatId`). **Defaults to last messaged chat.**
- `--disabled` - Create disabled
## Message Format
When a cron job runs, you receive a message like:
```
[cron:cron-123abc Morning Briefing] Good morning! Review tasks for today.
Current time: 1/27/2026, 8:00:00 AM (America/Los_Angeles)
```
This tells you:
- The message came from a cron job (not a user)
- The job ID and name
- The current time
## Cron Schedule Syntax
```
┌───────── minute (0-59)
│ ┌─────── hour (0-23)
│ │ ┌───── day of month (1-31)
│ │ │ ┌─── month (1-12)
│ │ │ │ ┌─ day of week (0-6, Sun=0)
* * * * *
```
| Pattern | When |
|---------|------|
| `0 8 * * *` | Daily at 8:00 AM |
| `0 9 * * 1-5` | Weekdays at 9:00 AM |
| `0 */2 * * *` | Every 2 hours |
| `30 17 * * 5` | Fridays at 5:30 PM |
| `0 0 1 * *` | First of month at midnight |
## Examples
**Daily morning check-in (delivered to Telegram):**
```bash
lettabot-cron create \
-n "Morning" \
-s "0 8 * * *" \
-m "Good morning! What's on today's agenda?" \
-d telegram:123456789
```
**Weekly review (delivered to Slack):**
```bash
lettabot-cron create \
-n "Weekly Review" \
-s "0 17 * * 5" \
-m "Friday wrap-up: What did we accomplish?" \
-d slack:C1234567890
```
## Notes
- Jobs schedule immediately when created (no restart needed)
- Use `lettabot-cron list` to see next run times and last run status
- Jobs persist in `cron-jobs.json`
- Logs written to `cron-log.jsonl`

.skills/eightctl/SKILL.md
---
name: eightctl
description: Control Eight Sleep pods (status, temperature, alarms, schedules).
homepage: https://eightctl.sh
metadata: {"clawdbot":{"emoji":"🎛️","requires":{"bins":["eightctl"]},"install":[{"id":"go","kind":"go","module":"github.com/steipete/eightctl/cmd/eightctl@latest","bins":["eightctl"],"label":"Install eightctl (go)"}]}}
---
# eightctl
Use `eightctl` for Eight Sleep pod control. Requires auth.
Auth
- Config: `~/.config/eightctl/config.yaml`
- Env: `EIGHTCTL_EMAIL`, `EIGHTCTL_PASSWORD`
Quick start
- `eightctl status`
- `eightctl on|off`
- `eightctl temp 20`
Common tasks
- Alarms: `eightctl alarm list|create|dismiss`
- Schedules: `eightctl schedule list|create|update`
- Audio: `eightctl audio state|play|pause`
- Base: `eightctl base info|angle`
Notes
- API is unofficial and rate-limited; avoid repeated logins.
- Confirm before changing temperature or alarms.

---
name: food-order
description: "Reorder Foodora orders + track ETA/status with ordercli. Never confirm without explicit user approval. Triggers: order food, reorder, track ETA."
homepage: https://ordercli.sh
metadata: {"clawdbot":{"emoji":"🥡","requires":{"bins":["ordercli"]},"install":[{"id":"go","kind":"go","module":"github.com/steipete/ordercli/cmd/ordercli@latest","bins":["ordercli"],"label":"Install ordercli (go)"}]}}
---
# Food order (Foodora via ordercli)
Goal: reorder a previous Foodora order safely (preview first; confirm only on explicit user “yes/confirm/place the order”).
Hard safety rules
- Never run `ordercli foodora reorder ... --confirm` unless user explicitly confirms placing the order.
- Prefer preview-only steps first; show what will happen; ask for confirmation.
- If user is unsure: stop at preview and ask questions.
Setup (once)
- Country: list options with `ordercli foodora countries`, then set one with `ordercli foodora config set --country AT`
- Login (password): `ordercli foodora login --email you@example.com --password-stdin`
- Login (no password, preferred): `ordercli foodora session chrome --url https://www.foodora.at/ --profile "Default"`
Find what to reorder
- Recent list: `ordercli foodora history --limit 10`
- Details: `ordercli foodora history show <orderCode>`
- If needed (machine-readable): `ordercli foodora history show <orderCode> --json`
Preview reorder (no cart changes)
- `ordercli foodora reorder <orderCode>`
Place reorder (cart change; explicit confirmation required)
- Confirm first, then run: `ordercli foodora reorder <orderCode> --confirm`
- Multiple addresses? Ask user for the right `--address-id` (take from their Foodora account / prior order data) and run:
- `ordercli foodora reorder <orderCode> --confirm --address-id <id>`
Track the order
- ETA/status (active list): `ordercli foodora orders`
- Live updates: `ordercli foodora orders --watch`
- Single order detail: `ordercli foodora order <orderCode>`
Debug / safe testing
- Use a throwaway config: `ordercli --config /tmp/ordercli.json ...`

.skills/gemini/SKILL.md
---
name: gemini
description: Gemini CLI for one-shot Q&A, summaries, and generation.
homepage: https://ai.google.dev/
metadata: {"clawdbot":{"emoji":"♊️","requires":{"bins":["gemini"]},"install":[{"id":"brew","kind":"brew","formula":"gemini-cli","bins":["gemini"],"label":"Install Gemini CLI (brew)"}]}}
---
# Gemini CLI
Use Gemini in one-shot mode with a positional prompt (avoid interactive mode).
Quick start
- `gemini "Answer this question..."`
- `gemini --model <name> "Prompt..."`
- `gemini --output-format json "Return JSON"`
Extensions
- List: `gemini --list-extensions`
- Manage: `gemini extensions <command>`
Notes
- If auth is required, run `gemini` once interactively and follow the login flow.
- Avoid `--yolo` for safety.

.skills/gifgrep/SKILL.md
---
name: gifgrep
description: Search GIF providers with CLI/TUI, download results, and extract stills/sheets.
homepage: https://gifgrep.com
metadata: {"clawdbot":{"emoji":"🧲","requires":{"bins":["gifgrep"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/gifgrep","bins":["gifgrep"],"label":"Install gifgrep (brew)"},{"id":"go","kind":"go","module":"github.com/steipete/gifgrep/cmd/gifgrep@latest","bins":["gifgrep"],"label":"Install gifgrep (go)"}]}}
---
# gifgrep
Use `gifgrep` to search GIF providers (Tenor/Giphy), browse in a TUI, download results, and extract stills or sheets.
GIF-Grab (gifgrep workflow)
- Search → preview → download → extract (still/sheet) for fast review and sharing.
Quick start
- `gifgrep cats --max 5`
- `gifgrep cats --format url | head -n 5`
- `gifgrep search --json cats | jq '.[0].url'`
- `gifgrep tui "office handshake"`
- `gifgrep cats --download --max 1 --format url`
TUI + previews
- TUI: `gifgrep tui "query"`
- CLI still previews: `--thumbs` (Kitty/Ghostty only; still frame)
Download + reveal
- `--download` saves to `~/Downloads`
- `--reveal` shows the last download in Finder
Stills + sheets
- `gifgrep still ./clip.gif --at 1.5s -o still.png`
- `gifgrep sheet ./clip.gif --frames 9 --cols 3 -o sheet.png`
- Sheets = single PNG grid of sampled frames (great for quick review, docs, PRs, chat).
- Tune: `--frames` (count), `--cols` (grid width), `--padding` (spacing).
Providers
- `--source auto|tenor|giphy`
- `GIPHY_API_KEY` required for `--source giphy`
- `TENOR_API_KEY` optional (Tenor demo key used if unset)
Output
- `--json` prints an array of results (`id`, `title`, `url`, `preview_url`, `tags`, `width`, `height`)
- `--format` for pipe-friendly fields (e.g., `url`)
Environment tweaks
- `GIFGREP_SOFTWARE_ANIM=1` to force software animation
- `GIFGREP_CELL_ASPECT=0.5` to tweak preview geometry

.skills/github/SKILL.md
---
name: github
description: "Interact with GitHub using the `gh` CLI. Use `gh issue`, `gh pr`, `gh run`, and `gh api` for issues, PRs, CI runs, and advanced queries."
metadata: {"clawdbot":{"emoji":"🐙","requires":{"bins":["gh"]},"install":[{"id":"brew","kind":"brew","formula":"gh","bins":["gh"],"label":"Install GitHub CLI (brew)"},{"id":"apt","kind":"apt","package":"gh","bins":["gh"],"label":"Install GitHub CLI (apt)"}]}}
---
# GitHub Skill
Use the `gh` CLI to interact with GitHub. Always specify `--repo owner/repo` when not in a git directory, or use URLs directly.
## Pull Requests
Check CI status on a PR:
```bash
gh pr checks 55 --repo owner/repo
```
List recent workflow runs:
```bash
gh run list --repo owner/repo --limit 10
```
View a run and see which steps failed:
```bash
gh run view <run-id> --repo owner/repo
```
View logs for failed steps only:
```bash
gh run view <run-id> --repo owner/repo --log-failed
```
## API for Advanced Queries
The `gh api` command is useful for accessing data not available through other subcommands.
Get PR with specific fields:
```bash
gh api repos/owner/repo/pulls/55 --jq '.title, .state, .user.login'
```
## JSON Output
Most commands support `--json` for structured output. You can use `--jq` to filter:
```bash
gh issue list --repo owner/repo --json number,title --jq '.[] | "\(.number): \(.title)"'
```

.skills/gog/SKILL.md
---
name: gog
description: Google Workspace CLI for Gmail, Calendar, Drive, Contacts, Sheets, and Docs.
homepage: https://gogcli.sh
metadata: {"clawdbot":{"emoji":"🎮","requires":{"bins":["gog"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/gogcli","bins":["gog"],"label":"Install gog (brew)"}]}}
---
# gog
Use `gog` for Gmail/Calendar/Drive/Contacts/Sheets/Docs. Requires OAuth setup.
Setup (once)
- `gog auth credentials /path/to/client_secret.json`
- `gog auth add you@gmail.com --services gmail,calendar,drive,contacts,docs,sheets`
- `gog auth list`
Common commands
- Gmail search: `gog gmail search 'newer_than:7d' --max 10`
- Gmail messages search (per email, ignores threading): `gog gmail messages search "in:inbox from:ryanair.com" --max 20 --account you@example.com`
- Gmail send (plain): `gog gmail send --to a@b.com --subject "Hi" --body "Hello"`
- Gmail send (multi-line): `gog gmail send --to a@b.com --subject "Hi" --body-file ./message.txt`
- Gmail send (stdin): `gog gmail send --to a@b.com --subject "Hi" --body-file -`
- Gmail send (HTML): `gog gmail send --to a@b.com --subject "Hi" --body-html "<p>Hello</p>"`
- Gmail draft: `gog gmail drafts create --to a@b.com --subject "Hi" --body-file ./message.txt`
- Gmail send draft: `gog gmail drafts send <draftId>`
- Gmail reply: `gog gmail send --to a@b.com --subject "Re: Hi" --body "Reply" --reply-to-message-id <msgId>`
- Calendar list events: `gog calendar events <calendarId> --from <iso> --to <iso>`
- Calendar create event: `gog calendar create <calendarId> --summary "Title" --from <iso> --to <iso>`
- Calendar create with color: `gog calendar create <calendarId> --summary "Title" --from <iso> --to <iso> --event-color 7`
- Calendar update event: `gog calendar update <calendarId> <eventId> --summary "New Title" --event-color 4`
- Calendar show colors: `gog calendar colors`
- Drive search: `gog drive search "query" --max 10`
- Contacts: `gog contacts list --max 20`
- Sheets get: `gog sheets get <sheetId> "Tab!A1:D10" --json`
- Sheets update: `gog sheets update <sheetId> "Tab!A1:B2" --values-json '[["A","B"],["1","2"]]' --input USER_ENTERED`
- Sheets append: `gog sheets append <sheetId> "Tab!A:C" --values-json '[["x","y","z"]]' --insert INSERT_ROWS`
- Sheets clear: `gog sheets clear <sheetId> "Tab!A2:Z"`
- Sheets metadata: `gog sheets metadata <sheetId> --json`
- Docs export: `gog docs export <docId> --format txt --out /tmp/doc.txt`
- Docs cat: `gog docs cat <docId>`
Calendar Colors
- Use `gog calendar colors` to see all available event colors (IDs 1-11)
- Add colors to events with `--event-color <id>` flag
- Event color IDs (from `gog calendar colors` output):
- 1: #a4bdfc
- 2: #7ae7bf
- 3: #dbadff
- 4: #ff887c
- 5: #fbd75b
- 6: #ffb878
- 7: #46d6db
- 8: #e1e1e1
- 9: #5484ed
- 10: #51b749
- 11: #dc2127
Email Formatting
- Prefer plain text. Use `--body-file` for multi-paragraph messages (or `--body-file -` for stdin).
- Same `--body-file` pattern works for drafts and replies.
- `--body` does not unescape `\n`. If you need inline newlines, use a heredoc or `$'Line 1\n\nLine 2'`.
- Use `--body-html` only when you need rich formatting.
- HTML tags: `<p>` for paragraphs, `<br>` for line breaks, `<strong>` for bold, `<em>` for italic, `<a href="url">` for links, `<ul>`/`<li>` for lists.
- Example (plain text via stdin):
```bash
gog gmail send --to recipient@example.com \
--subject "Meeting Follow-up" \
--body-file - <<'EOF'
Hi Name,
Thanks for meeting today. Next steps:
- Item one
- Item two
Best regards,
Your Name
EOF
```
- Example (HTML list):
```bash
gog gmail send --to recipient@example.com \
--subject "Meeting Follow-up" \
--body-html "<p>Hi Name,</p><p>Thanks for meeting today. Here are the next steps:</p><ul><li>Item one</li><li>Item two</li></ul><p>Best regards,<br>Your Name</p>"
```
Notes
- Set `GOG_ACCOUNT=you@gmail.com` to avoid repeating `--account`.
- For scripting, prefer `--json` plus `--no-input`.
- Sheets values can be passed via `--values-json` (recommended) or as inline rows.
- Docs supports export/cat/copy. In-place edits require a Docs API client (not in gog).
- Confirm before sending mail or creating events.
- `gog gmail search` returns one row per thread; use `gog gmail messages search` when you need every individual email returned separately.
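The thread-vs-message distinction in the last note amounts to a dedupe over thread IDs. An illustrative sketch (not gog internals; the message dicts are hypothetical):

```python
def one_per_thread(messages: list[dict]) -> list[dict]:
    """Collapse messages to one row per thread, like `gog gmail search`."""
    seen: set[str] = set()
    rows = []
    for msg in messages:
        if msg["thread_id"] not in seen:
            seen.add(msg["thread_id"])
            rows.append(msg)
    return rows

msgs = [
    {"id": "m1", "thread_id": "t1"},
    {"id": "m2", "thread_id": "t1"},  # same thread, dropped in thread view
    {"id": "m3", "thread_id": "t2"},
]
```

`gog gmail messages search` corresponds to returning `msgs` unchanged.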

.skills/google/SKILL.md Normal file

@@ -0,0 +1,128 @@
---
name: google
description: Google Workspace CLI (gog) for Gmail, Calendar, Drive, Contacts, Sheets, and Docs.
---
# Google Workspace (gog)
Use the `gog` CLI to interact with Google Workspace services.
## Setup
```bash
brew install steipete/tap/gogcli
gog auth credentials /path/to/credentials.json
gog auth add you@gmail.com --services gmail,calendar,drive,contacts,docs,sheets
gog auth list
```
## Gmail
```bash
# Search emails
gog gmail search 'newer_than:1h is:unread' --account EMAIL --max 10
gog gmail search 'from:someone@example.com' --account EMAIL --max 10
# Read email
gog gmail get MESSAGE_ID --account EMAIL
# Send email
gog gmail send --to recipient@example.com --subject "Subject" --body "Message" --account EMAIL
# Reply to thread
gog gmail send --to recipient@example.com --subject "Re: Original" --body "Reply" --reply-to-message-id MSG_ID --account EMAIL
# Create/send draft
gog gmail drafts create --to recipient@example.com --subject "Subject" --body "Draft" --account EMAIL
gog gmail drafts send DRAFT_ID --account EMAIL
# Manage labels
gog gmail labels --account EMAIL
gog gmail modify MESSAGE_ID --add-labels LABEL --account EMAIL
gog gmail modify MESSAGE_ID --remove-labels UNREAD --account EMAIL
```
## Calendar
```bash
# List events
gog calendar events CALENDAR_ID --from 2026-01-27T00:00:00Z --to 2026-01-28T00:00:00Z --account EMAIL
# Create event
gog calendar create CALENDAR_ID --summary "Meeting" --from 2026-01-27T10:00:00Z --to 2026-01-27T11:00:00Z --account EMAIL
# Create with color (1-11)
gog calendar create CALENDAR_ID --summary "Meeting" --from ISO --to ISO --event-color 7 --account EMAIL
# Update event
gog calendar update CALENDAR_ID EVENT_ID --summary "New Title" --account EMAIL
# Show available colors
gog calendar colors
```
## Drive
```bash
# Search files
gog drive search "query" --max 10 --account EMAIL
# List files in folder
gog drive list FOLDER_ID --account EMAIL
# Download file
gog drive download FILE_ID --out /path/to/file --account EMAIL
# Upload file
gog drive upload /path/to/file --parent FOLDER_ID --account EMAIL
```
## Contacts
```bash
# List contacts
gog contacts list --max 20 --account EMAIL
# Search contacts
gog contacts search "name" --account EMAIL
```
## Sheets
```bash
# Read range
gog sheets get SHEET_ID "Sheet1!A1:D10" --json --account EMAIL
# Update cells
gog sheets update SHEET_ID "Sheet1!A1:B2" --values-json '[["A","B"],["1","2"]]' --input USER_ENTERED --account EMAIL
# Append rows
gog sheets append SHEET_ID "Sheet1!A:C" --values-json '[["x","y","z"]]' --insert INSERT_ROWS --account EMAIL
# Clear range
gog sheets clear SHEET_ID "Sheet1!A2:Z" --account EMAIL
# Get metadata
gog sheets metadata SHEET_ID --json --account EMAIL
```
## Docs
```bash
# Read document
gog docs cat DOC_ID --account EMAIL
# Export to file
gog docs export DOC_ID --format txt --out /tmp/doc.txt --account EMAIL
```
## Environment
Set default account in `.env`:
```bash
GMAIL_ACCOUNT=you@gmail.com
```
## Email Polling
Emails are polled every minute via cron. Call `ignore()` when nothing needs a response.

.skills/goplaces/SKILL.md Normal file

@@ -0,0 +1,30 @@
---
name: goplaces
description: Query Google Places API (New) via the goplaces CLI for text search, place details, resolve, and reviews. Use for human-friendly place lookup or JSON output for scripts.
homepage: https://github.com/steipete/goplaces
metadata: {"clawdbot":{"emoji":"📍","requires":{"bins":["goplaces"],"env":["GOOGLE_PLACES_API_KEY"]},"primaryEnv":"GOOGLE_PLACES_API_KEY","install":[{"id":"brew","kind":"brew","formula":"steipete/tap/goplaces","bins":["goplaces"],"label":"Install goplaces (brew)"}]}}
---
# goplaces
Modern Google Places API (New) CLI. Human output by default, `--json` for scripts.
Install
- Homebrew: `brew install steipete/tap/goplaces`
Config
- `GOOGLE_PLACES_API_KEY` required.
- Optional: `GOOGLE_PLACES_BASE_URL` for testing/proxying.
Common commands
- Search: `goplaces search "coffee" --open-now --min-rating 4 --limit 5`
- Bias: `goplaces search "pizza" --lat 40.8 --lng -73.9 --radius-m 3000`
- Pagination: `goplaces search "pizza" --page-token "NEXT_PAGE_TOKEN"`
- Resolve: `goplaces resolve "Soho, London" --limit 5`
- Details: `goplaces details <place_id> --reviews`
- JSON: `goplaces search "sushi" --json`
Notes
- `--no-color` or `NO_COLOR` disables ANSI color.
- Price levels: 0..4 (free → very expensive).
- Type filter sends only the first `--type` value (API accepts one).

.skills/himalaya/SKILL.md Normal file

@@ -0,0 +1,217 @@
---
name: himalaya
description: "CLI to manage emails via IMAP/SMTP. Use `himalaya` to list, read, write, reply, forward, search, and organize emails from the terminal. Supports multiple accounts and message composition with MML (MIME Meta Language)."
homepage: https://github.com/pimalaya/himalaya
metadata: {"clawdbot":{"emoji":"📧","requires":{"bins":["himalaya"]},"install":[{"id":"brew","kind":"brew","formula":"himalaya","bins":["himalaya"],"label":"Install Himalaya (brew)"}]}}
---
# Himalaya Email CLI
Himalaya is a CLI email client that lets you manage emails from the terminal using IMAP, SMTP, Notmuch, or Sendmail backends.
## References
- `references/configuration.md` (config file setup + IMAP/SMTP authentication)
- `references/message-composition.md` (MML syntax for composing emails)
## Prerequisites
1. Himalaya CLI installed (`himalaya --version` to verify)
2. A configuration file at `~/.config/himalaya/config.toml`
3. IMAP/SMTP credentials configured (password stored securely)
## Configuration Setup
Run the interactive wizard to set up an account:
```bash
himalaya account configure
```
Or create `~/.config/himalaya/config.toml` manually:
```toml
[accounts.personal]
email = "you@example.com"
display-name = "Your Name"
default = true
backend.type = "imap"
backend.host = "imap.example.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "you@example.com"
backend.auth.type = "password"
backend.auth.cmd = "pass show email/imap" # or use keyring
message.send.backend.type = "smtp"
message.send.backend.host = "smtp.example.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "you@example.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.cmd = "pass show email/smtp"
```
## Common Operations
### List Folders
```bash
himalaya folder list
```
### List Emails
List emails in INBOX (default):
```bash
himalaya envelope list
```
List emails in a specific folder:
```bash
himalaya envelope list --folder "Sent"
```
List with pagination:
```bash
himalaya envelope list --page 1 --page-size 20
```
### Search Emails
```bash
himalaya envelope list from john@example.com subject meeting
```
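The filter shown above is passed as bare `key value` tokens after `envelope list`. A tiny sketch of assembling such a command line (hypothetical helper; it builds argv only and does not invoke himalaya):

```python
def search_argv(filters: dict[str, str]) -> list[str]:
    """Build a himalaya search command from key/value filter pairs."""
    argv = ["himalaya", "envelope", "list"]
    for key, value in filters.items():
        argv += [key, value]
    return argv
```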
### Read an Email
Read email by ID (shows plain text):
```bash
himalaya message read 42
```
Export raw MIME:
```bash
himalaya message export 42 --full
```
### Reply to an Email
Interactive reply (opens $EDITOR):
```bash
himalaya message reply 42
```
Reply-all:
```bash
himalaya message reply 42 --all
```
### Forward an Email
```bash
himalaya message forward 42
```
### Write a New Email
Interactive compose (opens $EDITOR):
```bash
himalaya message write
```
Send directly using template:
```bash
cat << 'EOF' | himalaya template send
From: you@example.com
To: recipient@example.com
Subject: Test Message
Hello from Himalaya!
EOF
```
Or with headers flag:
```bash
himalaya message write -H "To:recipient@example.com" -H "Subject:Test" "Message body here"
```
### Move/Copy Emails
Move to folder:
```bash
himalaya message move 42 "Archive"
```
Copy to folder:
```bash
himalaya message copy 42 "Important"
```
### Delete an Email
```bash
himalaya message delete 42
```
### Manage Flags
Add flag:
```bash
himalaya flag add 42 --flag seen
```
Remove flag:
```bash
himalaya flag remove 42 --flag seen
```
## Multiple Accounts
List accounts:
```bash
himalaya account list
```
Use a specific account:
```bash
himalaya --account work envelope list
```
## Attachments
Save attachments from a message:
```bash
himalaya attachment download 42
```
Save to specific directory:
```bash
himalaya attachment download 42 --dir ~/Downloads
```
## Output Formats
Most commands support `--output` for structured output:
```bash
himalaya envelope list --output json
himalaya envelope list --output plain
```
## Debugging
Enable debug logging:
```bash
RUST_LOG=debug himalaya envelope list
```
Full trace with backtrace:
```bash
RUST_LOG=trace RUST_BACKTRACE=1 himalaya envelope list
```
## Tips
- Use `himalaya --help` or `himalaya <command> --help` for detailed usage.
- Message IDs are relative to the current folder; re-list after folder changes.
- For composing rich emails with attachments, use MML syntax (see `references/message-composition.md`).
- Store passwords securely using `pass`, system keyring, or a command that outputs the password.


@@ -0,0 +1,174 @@
# Himalaya Configuration Reference
Configuration file location: `~/.config/himalaya/config.toml`
## Minimal IMAP + SMTP Setup
```toml
[accounts.default]
email = "user@example.com"
display-name = "Your Name"
default = true
# IMAP backend for reading emails
backend.type = "imap"
backend.host = "imap.example.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "user@example.com"
backend.auth.type = "password"
backend.auth.raw = "your-password"
# SMTP backend for sending emails
message.send.backend.type = "smtp"
message.send.backend.host = "smtp.example.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "user@example.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.raw = "your-password"
```
## Password Options
### Raw password (testing only, not recommended)
```toml
backend.auth.raw = "your-password"
```
### Password from command (recommended)
```toml
backend.auth.cmd = "pass show email/imap"
# backend.auth.cmd = "security find-generic-password -a user@example.com -s imap -w"
```
### System keyring (requires keyring feature)
```toml
backend.auth.keyring = "imap-example"
```
Then run `himalaya account configure <account>` to store the password.
## Gmail Configuration
```toml
[accounts.gmail]
email = "you@gmail.com"
display-name = "Your Name"
default = true
backend.type = "imap"
backend.host = "imap.gmail.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "you@gmail.com"
backend.auth.type = "password"
backend.auth.cmd = "pass show google/app-password"
message.send.backend.type = "smtp"
message.send.backend.host = "smtp.gmail.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "you@gmail.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.cmd = "pass show google/app-password"
```
**Note:** Gmail requires an App Password if 2FA is enabled.
## iCloud Configuration
```toml
[accounts.icloud]
email = "you@icloud.com"
display-name = "Your Name"
backend.type = "imap"
backend.host = "imap.mail.me.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "you@icloud.com"
backend.auth.type = "password"
backend.auth.cmd = "pass show icloud/app-password"
message.send.backend.type = "smtp"
message.send.backend.host = "smtp.mail.me.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "you@icloud.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.cmd = "pass show icloud/app-password"
```
**Note:** Generate an app-specific password at appleid.apple.com.
## Folder Aliases
Map custom folder names:
```toml
[accounts.default.folder.alias]
inbox = "INBOX"
sent = "Sent"
drafts = "Drafts"
trash = "Trash"
```
## Multiple Accounts
```toml
[accounts.personal]
email = "personal@example.com"
default = true
# ... backend config ...
[accounts.work]
email = "work@company.com"
# ... backend config ...
```
Switch accounts with `--account`:
```bash
himalaya --account work envelope list
```
## Notmuch Backend (local mail)
```toml
[accounts.local]
email = "user@example.com"
backend.type = "notmuch"
backend.db-path = "~/.mail/.notmuch"
```
## OAuth2 Authentication (for providers that support it)
```toml
backend.auth.type = "oauth2"
backend.auth.client-id = "your-client-id"
backend.auth.client-secret.cmd = "pass show oauth/client-secret"
backend.auth.access-token.cmd = "pass show oauth/access-token"
backend.auth.refresh-token.cmd = "pass show oauth/refresh-token"
backend.auth.auth-url = "https://provider.com/oauth/authorize"
backend.auth.token-url = "https://provider.com/oauth/token"
```
## Additional Options
### Signature
```toml
[accounts.default]
signature = "Best regards,\nYour Name"
signature-delim = "-- \n"
```
### Downloads directory
```toml
[accounts.default]
downloads-dir = "~/Downloads/himalaya"
```
### Editor for composing
Set via environment variable:
```bash
export EDITOR="vim"
```


@@ -0,0 +1,182 @@
# Message Composition with MML (MIME Meta Language)
Himalaya uses MML for composing emails. MML is a simple XML-based syntax that compiles to MIME messages.
## Basic Message Structure
An email message is a list of **headers** followed by a **body**, separated by a blank line:
```
From: sender@example.com
To: recipient@example.com
Subject: Hello World
This is the message body.
```
## Headers
Common headers:
- `From`: Sender address
- `To`: Primary recipient(s)
- `Cc`: Carbon copy recipients
- `Bcc`: Blind carbon copy recipients
- `Subject`: Message subject
- `Reply-To`: Address for replies (if different from From)
- `In-Reply-To`: Message ID being replied to
### Address Formats
```
To: user@example.com
To: John Doe <john@example.com>
To: "John Doe" <john@example.com>
To: user1@example.com, user2@example.com, "Jane" <jane@example.com>
```
## Plain Text Body
Simple plain text email:
```
From: alice@localhost
To: bob@localhost
Subject: Plain Text Example
Hello, this is a plain text email.
No special formatting needed.
Best,
Alice
```
## MML for Rich Emails
### Multipart Messages
Alternative text/html parts:
```
From: alice@localhost
To: bob@localhost
Subject: Multipart Example
<#multipart type=alternative>
This is the plain text version.
<#part type=text/html>
<html><body><h1>This is the HTML version</h1></body></html>
<#/multipart>
```
### Attachments
Attach a file:
```
From: alice@localhost
To: bob@localhost
Subject: With Attachment
Here is the document you requested.
<#part filename=/path/to/document.pdf><#/part>
```
Attachment with custom name:
```
<#part filename=/path/to/file.pdf name=report.pdf><#/part>
```
Multiple attachments:
```
<#part filename=/path/to/doc1.pdf><#/part>
<#part filename=/path/to/doc2.pdf><#/part>
```
### Inline Images
Embed an image inline:
```
From: alice@localhost
To: bob@localhost
Subject: Inline Image
<#multipart type=related>
<#part type=text/html>
<html><body>
<p>Check out this image:</p>
<img src="cid:image1">
</body></html>
<#part disposition=inline id=image1 filename=/path/to/image.png><#/part>
<#/multipart>
```
### Mixed Content (Text + Attachments)
```
From: alice@localhost
To: bob@localhost
Subject: Mixed Content
<#multipart type=mixed>
<#part type=text/plain>
Please find the attached files.
Best,
Alice
<#part filename=/path/to/file1.pdf><#/part>
<#part filename=/path/to/file2.zip><#/part>
<#/multipart>
```
## MML Tag Reference
### `<#multipart>`
Groups multiple parts together.
- `type=alternative`: Different representations of same content
- `type=mixed`: Independent parts (text + attachments)
- `type=related`: Parts that reference each other (HTML + images)
### `<#part>`
Defines a message part.
- `type=<mime-type>`: Content type (e.g., `text/html`, `application/pdf`)
- `filename=<path>`: File to attach
- `name=<name>`: Display name for attachment
- `disposition=inline`: Display inline instead of as attachment
- `id=<cid>`: Content ID for referencing in HTML
## Composing from CLI
### Interactive compose
Opens your `$EDITOR`:
```bash
himalaya message write
```
### Reply (opens editor with quoted message)
```bash
himalaya message reply 42
himalaya message reply 42 --all # reply-all
```
### Forward
```bash
himalaya message forward 42
```
### Send from stdin
```bash
cat message.txt | himalaya template send
```
### Prefill headers from CLI
```bash
himalaya message write \
-H "To:recipient@example.com" \
-H "Subject:Quick Message" \
"Message body here"
```
## Tips
- The editor opens with a template; fill in headers and body.
- Save and exit the editor to send; exit without saving to cancel.
- MML parts are compiled to proper MIME when sending.
- Use `himalaya message export --full` to inspect the raw MIME structure of received emails.

.skills/imsg/SKILL.md Normal file

@@ -0,0 +1,25 @@
---
name: imsg
description: iMessage/SMS CLI for listing chats, history, watch, and sending.
homepage: https://imsg.to
metadata: {"clawdbot":{"emoji":"📨","os":["darwin"],"requires":{"bins":["imsg"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/imsg","bins":["imsg"],"label":"Install imsg (brew)"}]}}
---
# imsg
Use `imsg` to read and send iMessage/SMS via Messages.app on macOS.
Requirements
- Messages.app signed in
- Full Disk Access for your terminal
- Automation permission to control Messages.app (for sending)
Common commands
- List chats: `imsg chats --limit 10 --json`
- History: `imsg history --chat-id 1 --limit 20 --attachments --json`
- Watch: `imsg watch --chat-id 1 --attachments`
- Send: `imsg send --to "+14155551212" --text "hi" --file /path/pic.jpg`
Notes
- `--service imessage|sms|auto` controls delivery.
- Confirm recipient + message before sending.


@@ -0,0 +1,101 @@
# Local Places
This repo is a fusion of two pieces:
- A FastAPI server that exposes endpoints for searching and resolving places via the Google Maps Places API.
- A companion agent skill that explains how to use the API and can call it to find places efficiently.
Together, the skill and server let an agent turn natural-language place queries into structured results quickly.
## Run locally
```bash
# copy skill definition into the relevant folder (where the agent looks for it)
# then run the server
uv venv
uv pip install -e ".[dev]"
uv run --env-file .env uvicorn local_places.main:app --host 0.0.0.0 --reload
```
Open the API docs at http://127.0.0.1:8000/docs.
## Places API
Set the Google Places API key before running:
```bash
export GOOGLE_PLACES_API_KEY="your-key"
```
Endpoints:
- `POST /places/search` (free-text query + filters)
- `GET /places/{place_id}` (place details)
- `POST /locations/resolve` (resolve a user-provided location string)
Example search request:
```json
{
"query": "italian restaurant",
"filters": {
"types": ["restaurant"],
"open_now": true,
"min_rating": 4.0,
"price_levels": [1, 2]
},
"limit": 10
}
```
Notes:
- `filters.types` supports a single type (mapped to Google `includedType`).
Example search request (curl):
```bash
curl -X POST http://127.0.0.1:8000/places/search \
-H "Content-Type: application/json" \
-d '{
"query": "italian restaurant",
"location_bias": {
"lat": 40.8065,
"lng": -73.9719,
"radius_m": 3000
},
"filters": {
"types": ["restaurant"],
"open_now": true,
"min_rating": 4.0,
"price_levels": [1, 2, 3]
},
"limit": 10
}'
```
Example resolve request (curl):
```bash
curl -X POST http://127.0.0.1:8000/locations/resolve \
-H "Content-Type: application/json" \
-d '{
"location_text": "Riverside Park, New York",
"limit": 5
}'
```
## Test
```bash
uv run pytest
```
## OpenAPI
Generate the OpenAPI schema:
```bash
uv run python scripts/generate_openapi.py
```


@@ -0,0 +1,91 @@
---
name: local-places
description: Search for places (restaurants, cafes, etc.) via Google Places API proxy on localhost.
homepage: https://github.com/Hyaxia/local_places
metadata: {"clawdbot":{"emoji":"📍","requires":{"bins":["uv"],"env":["GOOGLE_PLACES_API_KEY"]},"primaryEnv":"GOOGLE_PLACES_API_KEY"}}
---
# 📍 Local Places
*Find places, Go fast*
Search for nearby places using a local Google Places API proxy. Two-step flow: resolve location first, then search.
## Setup
```bash
cd {baseDir}
echo "GOOGLE_PLACES_API_KEY=your-key" > .env
uv venv && uv pip install -e ".[dev]"
uv run --env-file .env uvicorn local_places.main:app --host 127.0.0.1 --port 8000
```
Requires `GOOGLE_PLACES_API_KEY` in `.env` or environment.
## Quick Start
1. **Check server:** `curl http://127.0.0.1:8000/ping`
2. **Resolve location:**
```bash
curl -X POST http://127.0.0.1:8000/locations/resolve \
-H "Content-Type: application/json" \
-d '{"location_text": "Soho, London", "limit": 5}'
```
3. **Search places:**
```bash
curl -X POST http://127.0.0.1:8000/places/search \
-H "Content-Type: application/json" \
-d '{
"query": "coffee shop",
"location_bias": {"lat": 51.5137, "lng": -0.1366, "radius_m": 1000},
"filters": {"open_now": true, "min_rating": 4.0},
"limit": 10
}'
```
4. **Get details:**
```bash
curl http://127.0.0.1:8000/places/{place_id}
```
## Conversation Flow
1. If the user says "near me" or gives a vague location → resolve it first
2. If there are multiple results → show a numbered list and ask the user to pick
3. Ask for preferences: type, open now, rating, price level
4. Search with `location_bias` from the chosen location
5. Present results with name, rating, address, and open status
6. Offer to fetch details or refine the search
## Filter Constraints
- `filters.types`: exactly ONE type (e.g., "restaurant", "cafe", "gym")
- `filters.price_levels`: integers 0-4 (0=free, 4=very expensive)
- `filters.min_rating`: 0-5 in 0.5 increments
- `filters.open_now`: boolean
- `limit`: 1-20 for search, 1-10 for resolve
- `location_bias.radius_m`: must be > 0
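The constraints above can be checked client-side before calling the server; a partial sketch (this mirrors, but is not, the server's pydantic validation, and skips the `limit`/`radius_m` checks):

```python
def check_filters(filters: dict) -> None:
    """Raise ValueError if a filters dict violates the documented limits."""
    types = filters.get("types")
    if types is not None and len(types) != 1:
        raise ValueError("filters.types must contain exactly one type")
    for level in filters.get("price_levels", []):
        if level not in range(5):
            raise ValueError("price_levels must be integers 0-4")
    rating = filters.get("min_rating")
    if rating is not None and (rating < 0 or rating > 5 or (rating * 2) % 1):
        raise ValueError("min_rating must be 0-5 in 0.5 increments")
```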
## Response Format
```json
{
"results": [
{
"place_id": "ChIJ...",
"name": "Coffee Shop",
"address": "123 Main St",
"location": {"lat": 51.5, "lng": -0.1},
"rating": 4.6,
"price_level": 2,
"types": ["cafe", "food"],
"open_now": true
}
],
"next_page_token": "..."
}
```
Use `next_page_token` as `page_token` in next request for more results.


@@ -0,0 +1,27 @@
[project]
name = "my-api"
version = "0.1.0"
description = "FastAPI server"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"fastapi>=0.110.0",
"httpx>=0.27.0",
"uvicorn[standard]>=0.29.0",
]
[project.optional-dependencies]
dev = [
"pytest>=8.0.0",
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.build.targets.wheel]
packages = ["src/local_places"]
[tool.pytest.ini_options]
addopts = "-q"
testpaths = ["tests"]


@@ -0,0 +1,2 @@
__all__ = ["__version__"]
__version__ = "0.1.0"


@@ -0,0 +1,314 @@
from __future__ import annotations
import logging
import os
from typing import Any
import httpx
from fastapi import HTTPException
from local_places.schemas import (
LatLng,
LocationResolveRequest,
LocationResolveResponse,
PlaceDetails,
PlaceSummary,
ResolvedLocation,
SearchRequest,
SearchResponse,
)
GOOGLE_PLACES_BASE_URL = os.getenv(
"GOOGLE_PLACES_BASE_URL", "https://places.googleapis.com/v1"
)
logger = logging.getLogger("local_places.google_places")
_PRICE_LEVEL_TO_ENUM = {
0: "PRICE_LEVEL_FREE",
1: "PRICE_LEVEL_INEXPENSIVE",
2: "PRICE_LEVEL_MODERATE",
3: "PRICE_LEVEL_EXPENSIVE",
4: "PRICE_LEVEL_VERY_EXPENSIVE",
}
_ENUM_TO_PRICE_LEVEL = {value: key for key, value in _PRICE_LEVEL_TO_ENUM.items()}
_SEARCH_FIELD_MASK = (
"places.id,"
"places.displayName,"
"places.formattedAddress,"
"places.location,"
"places.rating,"
"places.priceLevel,"
"places.types,"
"places.currentOpeningHours,"
"nextPageToken"
)
_DETAILS_FIELD_MASK = (
"id,"
"displayName,"
"formattedAddress,"
"location,"
"rating,"
"priceLevel,"
"types,"
"regularOpeningHours,"
"currentOpeningHours,"
"nationalPhoneNumber,"
"websiteUri"
)
_RESOLVE_FIELD_MASK = (
"places.id,"
"places.displayName,"
"places.formattedAddress,"
"places.location,"
"places.types"
)
class _GoogleResponse:
def __init__(self, response: httpx.Response):
self.status_code = response.status_code
self._response = response
def json(self) -> dict[str, Any]:
return self._response.json()
@property
def text(self) -> str:
return self._response.text
def _api_headers(field_mask: str) -> dict[str, str]:
api_key = os.getenv("GOOGLE_PLACES_API_KEY")
if not api_key:
raise HTTPException(
status_code=500,
detail="GOOGLE_PLACES_API_KEY is not set.",
)
return {
"Content-Type": "application/json",
"X-Goog-Api-Key": api_key,
"X-Goog-FieldMask": field_mask,
}
def _request(
method: str, url: str, payload: dict[str, Any] | None, field_mask: str
) -> _GoogleResponse:
try:
with httpx.Client(timeout=10.0) as client:
response = client.request(
method=method,
url=url,
headers=_api_headers(field_mask),
json=payload,
)
except httpx.HTTPError as exc:
raise HTTPException(status_code=502, detail="Google Places API unavailable.") from exc
return _GoogleResponse(response)
def _build_text_query(request: SearchRequest) -> str:
keyword = request.filters.keyword if request.filters else None
if keyword:
return f"{request.query} {keyword}".strip()
return request.query
def _build_search_body(request: SearchRequest) -> dict[str, Any]:
body: dict[str, Any] = {
"textQuery": _build_text_query(request),
"pageSize": request.limit,
}
if request.page_token:
body["pageToken"] = request.page_token
if request.location_bias:
body["locationBias"] = {
"circle": {
"center": {
"latitude": request.location_bias.lat,
"longitude": request.location_bias.lng,
},
"radius": request.location_bias.radius_m,
}
}
if request.filters:
filters = request.filters
if filters.types:
body["includedType"] = filters.types[0]
if filters.open_now is not None:
body["openNow"] = filters.open_now
if filters.min_rating is not None:
body["minRating"] = filters.min_rating
if filters.price_levels:
body["priceLevels"] = [
_PRICE_LEVEL_TO_ENUM[level] for level in filters.price_levels
]
return body
def _parse_lat_lng(raw: dict[str, Any] | None) -> LatLng | None:
if not raw:
return None
latitude = raw.get("latitude")
longitude = raw.get("longitude")
if latitude is None or longitude is None:
return None
return LatLng(lat=latitude, lng=longitude)
def _parse_display_name(raw: dict[str, Any] | None) -> str | None:
if not raw:
return None
return raw.get("text")
def _parse_open_now(raw: dict[str, Any] | None) -> bool | None:
if not raw:
return None
return raw.get("openNow")
def _parse_hours(raw: dict[str, Any] | None) -> list[str] | None:
if not raw:
return None
return raw.get("weekdayDescriptions")
def _parse_price_level(raw: str | None) -> int | None:
if not raw:
return None
return _ENUM_TO_PRICE_LEVEL.get(raw)
def search_places(request: SearchRequest) -> SearchResponse:
url = f"{GOOGLE_PLACES_BASE_URL}/places:searchText"
response = _request("POST", url, _build_search_body(request), _SEARCH_FIELD_MASK)
if response.status_code >= 400:
logger.error(
"Google Places API error %s. response=%s",
response.status_code,
response.text,
)
raise HTTPException(
status_code=502,
detail=f"Google Places API error ({response.status_code}).",
)
try:
payload = response.json()
except ValueError as exc:
logger.error(
"Google Places API returned invalid JSON. response=%s",
response.text,
)
raise HTTPException(status_code=502, detail="Invalid Google response.") from exc
places = payload.get("places", [])
results = []
for place in places:
results.append(
PlaceSummary(
place_id=place.get("id", ""),
name=_parse_display_name(place.get("displayName")),
address=place.get("formattedAddress"),
location=_parse_lat_lng(place.get("location")),
rating=place.get("rating"),
price_level=_parse_price_level(place.get("priceLevel")),
types=place.get("types"),
open_now=_parse_open_now(place.get("currentOpeningHours")),
)
)
return SearchResponse(
results=results,
next_page_token=payload.get("nextPageToken"),
)
def get_place_details(place_id: str) -> PlaceDetails:
url = f"{GOOGLE_PLACES_BASE_URL}/places/{place_id}"
response = _request("GET", url, None, _DETAILS_FIELD_MASK)
if response.status_code >= 400:
logger.error(
"Google Places API error %s. response=%s",
response.status_code,
response.text,
)
raise HTTPException(
status_code=502,
detail=f"Google Places API error ({response.status_code}).",
)
try:
payload = response.json()
except ValueError as exc:
logger.error(
"Google Places API returned invalid JSON. response=%s",
response.text,
)
raise HTTPException(status_code=502, detail="Invalid Google response.") from exc
return PlaceDetails(
place_id=payload.get("id", place_id),
name=_parse_display_name(payload.get("displayName")),
address=payload.get("formattedAddress"),
location=_parse_lat_lng(payload.get("location")),
rating=payload.get("rating"),
price_level=_parse_price_level(payload.get("priceLevel")),
types=payload.get("types"),
phone=payload.get("nationalPhoneNumber"),
website=payload.get("websiteUri"),
hours=_parse_hours(payload.get("regularOpeningHours")),
open_now=_parse_open_now(payload.get("currentOpeningHours")),
)
def resolve_locations(request: LocationResolveRequest) -> LocationResolveResponse:
url = f"{GOOGLE_PLACES_BASE_URL}/places:searchText"
body = {"textQuery": request.location_text, "pageSize": request.limit}
response = _request("POST", url, body, _RESOLVE_FIELD_MASK)
if response.status_code >= 400:
logger.error(
"Google Places API error %s. response=%s",
response.status_code,
response.text,
)
raise HTTPException(
status_code=502,
detail=f"Google Places API error ({response.status_code}).",
)
try:
payload = response.json()
except ValueError as exc:
logger.error(
"Google Places API returned invalid JSON. response=%s",
response.text,
)
raise HTTPException(status_code=502, detail="Invalid Google response.") from exc
places = payload.get("places", [])
results = []
for place in places:
results.append(
ResolvedLocation(
place_id=place.get("id", ""),
name=_parse_display_name(place.get("displayName")),
address=place.get("formattedAddress"),
location=_parse_lat_lng(place.get("location")),
types=place.get("types"),
)
)
return LocationResolveResponse(results=results)


@@ -0,0 +1,65 @@
import logging
import os
from fastapi import FastAPI, Request
from fastapi.encoders import jsonable_encoder
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse
from local_places.google_places import get_place_details, resolve_locations, search_places
from local_places.schemas import (
LocationResolveRequest,
LocationResolveResponse,
PlaceDetails,
SearchRequest,
SearchResponse,
)
app = FastAPI(
title="My API",
servers=[{"url": os.getenv("OPENAPI_SERVER_URL", "http://maxims-macbook-air:8000")}],
)
logger = logging.getLogger("local_places.validation")
@app.get("/ping")
def ping() -> dict[str, str]:
return {"message": "pong"}
@app.exception_handler(RequestValidationError)
async def validation_exception_handler(
request: Request, exc: RequestValidationError
) -> JSONResponse:
logger.error(
"Validation error on %s %s. body=%s errors=%s",
request.method,
request.url.path,
exc.body,
exc.errors(),
)
return JSONResponse(
status_code=422,
content=jsonable_encoder({"detail": exc.errors()}),
)
@app.post("/places/search", response_model=SearchResponse)
def places_search(request: SearchRequest) -> SearchResponse:
return search_places(request)
@app.get("/places/{place_id}", response_model=PlaceDetails)
def places_details(place_id: str) -> PlaceDetails:
return get_place_details(place_id)
@app.post("/locations/resolve", response_model=LocationResolveResponse)
def locations_resolve(request: LocationResolveRequest) -> LocationResolveResponse:
return resolve_locations(request)
if __name__ == "__main__":
import uvicorn
uvicorn.run("local_places.main:app", host="0.0.0.0", port=8000)


@@ -0,0 +1,107 @@
from __future__ import annotations
from pydantic import BaseModel, Field, field_validator
class LatLng(BaseModel):
lat: float = Field(ge=-90, le=90)
lng: float = Field(ge=-180, le=180)
class LocationBias(BaseModel):
lat: float = Field(ge=-90, le=90)
lng: float = Field(ge=-180, le=180)
radius_m: float = Field(gt=0)
class Filters(BaseModel):
types: list[str] | None = None
open_now: bool | None = None
min_rating: float | None = Field(default=None, ge=0, le=5)
price_levels: list[int] | None = None
keyword: str | None = Field(default=None, min_length=1)
@field_validator("types")
@classmethod
def validate_types(cls, value: list[str] | None) -> list[str] | None:
if value is None:
return value
if len(value) > 1:
raise ValueError(
"Only one type is supported. Use query/keyword for additional filtering."
)
return value
@field_validator("price_levels")
@classmethod
def validate_price_levels(cls, value: list[int] | None) -> list[int] | None:
if value is None:
return value
invalid = [level for level in value if level not in range(0, 5)]
if invalid:
raise ValueError("price_levels must be integers between 0 and 4.")
return value
@field_validator("min_rating")
@classmethod
def validate_min_rating(cls, value: float | None) -> float | None:
if value is None:
return value
if (value * 2) % 1 != 0:
raise ValueError("min_rating must be in 0.5 increments.")
return value
class SearchRequest(BaseModel):
query: str = Field(min_length=1)
location_bias: LocationBias | None = None
filters: Filters | None = None
limit: int = Field(default=10, ge=1, le=20)
page_token: str | None = None
class PlaceSummary(BaseModel):
place_id: str
name: str | None = None
address: str | None = None
location: LatLng | None = None
rating: float | None = None
price_level: int | None = None
types: list[str] | None = None
open_now: bool | None = None
class SearchResponse(BaseModel):
results: list[PlaceSummary]
next_page_token: str | None = None
class LocationResolveRequest(BaseModel):
location_text: str = Field(min_length=1)
limit: int = Field(default=5, ge=1, le=10)
class ResolvedLocation(BaseModel):
place_id: str
name: str | None = None
address: str | None = None
location: LatLng | None = None
types: list[str] | None = None
class LocationResolveResponse(BaseModel):
results: list[ResolvedLocation]
class PlaceDetails(BaseModel):
place_id: str
name: str | None = None
address: str | None = None
location: LatLng | None = None
rating: float | None = None
price_level: int | None = None
types: list[str] | None = None
phone: str | None = None
website: str | None = None
hours: list[str] | None = None
open_now: bool | None = None

.skills/mcporter/SKILL.md Normal file

@@ -0,0 +1,38 @@
---
name: mcporter
description: Use the mcporter CLI to list, configure, auth, and call MCP servers/tools directly (HTTP or stdio), including ad-hoc servers, config edits, and CLI/type generation.
homepage: http://mcporter.dev
metadata: {"clawdbot":{"emoji":"📦","requires":{"bins":["mcporter"]},"install":[{"id":"node","kind":"node","package":"mcporter","bins":["mcporter"],"label":"Install mcporter (node)"}]}}
---
# mcporter
Use `mcporter` to work with MCP servers directly.
Quick start
- `mcporter list`
- `mcporter list <server> --schema`
- `mcporter call <server.tool> key=value`
Call tools
- Selector: `mcporter call linear.list_issues team=ENG limit:5`
- Function syntax: `mcporter call "linear.create_issue(title: \"Bug\")"`
- Full URL: `mcporter call https://api.example.com/mcp.fetch url:https://example.com`
- Stdio: `mcporter call --stdio "bun run ./server.ts" scrape url=https://example.com`
- JSON payload: `mcporter call <server.tool> --args '{"limit":5}'`
Auth + config
- OAuth: `mcporter auth <server | url> [--reset]`
- Config: `mcporter config list|get|add|remove|import|login|logout`
Daemon
- `mcporter daemon start|status|stop|restart`
Codegen
- CLI: `mcporter generate-cli --server <name>` or `--command <url>`
- Inspect: `mcporter inspect-cli <path> [--json]`
- TS: `mcporter emit-ts <server> --mode client|types`
Notes
- Config default: `./config/mcporter.json` (override with `--config`).
- Prefer `--output json` for machine-readable results.


@@ -0,0 +1,35 @@
---
name: nano-banana-pro
description: Generate or edit images via Gemini 3 Pro Image (Nano Banana Pro).
homepage: https://ai.google.dev/
metadata: {"clawdbot":{"emoji":"🍌","requires":{"bins":["uv"],"env":["GEMINI_API_KEY"]},"primaryEnv":"GEMINI_API_KEY","install":[{"id":"uv-brew","kind":"brew","formula":"uv","bins":["uv"],"label":"Install uv (brew)"}]}}
---
# Nano Banana Pro (Gemini 3 Pro Image)
Use the bundled script to generate or edit images.
Generate
```bash
uv run {baseDir}/scripts/generate_image.py --prompt "your image description" --filename "output.png" --resolution 1K
```
Edit (single image)
```bash
uv run {baseDir}/scripts/generate_image.py --prompt "edit instructions" --filename "output.png" -i "/path/in.png" --resolution 2K
```
Multi-image composition (up to 14 images)
```bash
uv run {baseDir}/scripts/generate_image.py --prompt "combine these into one scene" --filename "output.png" -i img1.png -i img2.png -i img3.png
```
API key
- `GEMINI_API_KEY` env var
- Or set `skills."nano-banana-pro".apiKey` / `skills."nano-banana-pro".env.GEMINI_API_KEY` in `~/.clawdbot/clawdbot.json`
Notes
- Resolutions: `1K` (default), `2K`, `4K`.
- Use timestamps in filenames: `yyyy-mm-dd-hh-mm-ss-name.png`.
- The script prints a `MEDIA:` line for Clawdbot to auto-attach on supported chat providers.
- Do not read the image back; report the saved path only.


@@ -0,0 +1,184 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "google-genai>=1.0.0",
# "pillow>=10.0.0",
# ]
# ///
"""
Generate images using Google's Nano Banana Pro (Gemini 3 Pro Image) API.
Usage:
uv run generate_image.py --prompt "your image description" --filename "output.png" [--resolution 1K|2K|4K] [--api-key KEY]
Multi-image editing (up to 14 images):
uv run generate_image.py --prompt "combine these images" --filename "output.png" -i img1.png -i img2.png -i img3.png
"""
import argparse
import os
import sys
from pathlib import Path
def get_api_key(provided_key: str | None) -> str | None:
"""Get API key from argument first, then environment."""
if provided_key:
return provided_key
return os.environ.get("GEMINI_API_KEY")
def main():
parser = argparse.ArgumentParser(
description="Generate images using Nano Banana Pro (Gemini 3 Pro Image)"
)
parser.add_argument(
"--prompt", "-p",
required=True,
help="Image description/prompt"
)
parser.add_argument(
"--filename", "-f",
required=True,
help="Output filename (e.g., sunset-mountains.png)"
)
parser.add_argument(
"--input-image", "-i",
action="append",
dest="input_images",
metavar="IMAGE",
help="Input image path(s) for editing/composition. Can be specified multiple times (up to 14 images)."
)
parser.add_argument(
"--resolution", "-r",
choices=["1K", "2K", "4K"],
default="1K",
help="Output resolution: 1K (default), 2K, or 4K"
)
parser.add_argument(
"--api-key", "-k",
help="Gemini API key (overrides GEMINI_API_KEY env var)"
)
args = parser.parse_args()
# Get API key
api_key = get_api_key(args.api_key)
if not api_key:
print("Error: No API key provided.", file=sys.stderr)
print("Please either:", file=sys.stderr)
print(" 1. Provide --api-key argument", file=sys.stderr)
print(" 2. Set GEMINI_API_KEY environment variable", file=sys.stderr)
sys.exit(1)
# Import here after checking API key to avoid slow import on error
from google import genai
from google.genai import types
from PIL import Image as PILImage
# Initialise client
client = genai.Client(api_key=api_key)
# Set up output path
output_path = Path(args.filename)
output_path.parent.mkdir(parents=True, exist_ok=True)
# Load input images if provided (up to 14 supported by Nano Banana Pro)
input_images = []
output_resolution = args.resolution
if args.input_images:
if len(args.input_images) > 14:
print(f"Error: Too many input images ({len(args.input_images)}). Maximum is 14.", file=sys.stderr)
sys.exit(1)
max_input_dim = 0
for img_path in args.input_images:
try:
img = PILImage.open(img_path)
input_images.append(img)
print(f"Loaded input image: {img_path}")
# Track largest dimension for auto-resolution
width, height = img.size
max_input_dim = max(max_input_dim, width, height)
except Exception as e:
print(f"Error loading input image '{img_path}': {e}", file=sys.stderr)
sys.exit(1)
# Auto-detect resolution from largest input if not explicitly set
if args.resolution == "1K" and max_input_dim > 0: # Default value
if max_input_dim >= 3000:
output_resolution = "4K"
elif max_input_dim >= 1500:
output_resolution = "2K"
else:
output_resolution = "1K"
print(f"Auto-detected resolution: {output_resolution} (from max input dimension {max_input_dim})")
# Build contents (images first if editing, prompt only if generating)
if input_images:
contents = [*input_images, args.prompt]
img_count = len(input_images)
print(f"Processing {img_count} image{'s' if img_count > 1 else ''} with resolution {output_resolution}...")
else:
contents = args.prompt
print(f"Generating image with resolution {output_resolution}...")
try:
response = client.models.generate_content(
model="gemini-3-pro-image-preview",
contents=contents,
config=types.GenerateContentConfig(
response_modalities=["TEXT", "IMAGE"],
image_config=types.ImageConfig(
image_size=output_resolution
)
)
)
# Process response and convert to PNG
image_saved = False
for part in response.parts:
if part.text is not None:
print(f"Model response: {part.text}")
elif part.inline_data is not None:
# Convert inline data to PIL Image and save as PNG
from io import BytesIO
# inline_data.data is already bytes, not base64
image_data = part.inline_data.data
if isinstance(image_data, str):
# If it's a string, it might be base64
import base64
image_data = base64.b64decode(image_data)
image = PILImage.open(BytesIO(image_data))
# Ensure RGB mode for PNG (convert RGBA to RGB with white background if needed)
if image.mode == 'RGBA':
rgb_image = PILImage.new('RGB', image.size, (255, 255, 255))
rgb_image.paste(image, mask=image.split()[3])
rgb_image.save(str(output_path), 'PNG')
elif image.mode == 'RGB':
image.save(str(output_path), 'PNG')
else:
image.convert('RGB').save(str(output_path), 'PNG')
image_saved = True
if image_saved:
full_path = output_path.resolve()
print(f"\nImage saved: {full_path}")
# Clawdbot parses MEDIA tokens and will attach the file on supported providers.
print(f"MEDIA: {full_path}")
else:
print("Error: No image was generated in the response.", file=sys.stderr)
sys.exit(1)
except Exception as e:
print(f"Error generating image: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()

.skills/nano-pdf/SKILL.md Normal file

@@ -0,0 +1,20 @@
---
name: nano-pdf
description: Edit PDFs with natural-language instructions using the nano-pdf CLI.
homepage: https://pypi.org/project/nano-pdf/
metadata: {"clawdbot":{"emoji":"📄","requires":{"bins":["nano-pdf"]},"install":[{"id":"uv","kind":"uv","package":"nano-pdf","bins":["nano-pdf"],"label":"Install nano-pdf (uv)"}]}}
---
# nano-pdf
Use `nano-pdf` to apply edits to a specific page in a PDF using a natural-language instruction.
## Quick start
```bash
nano-pdf edit deck.pdf 1 "Change the title to 'Q3 Results' and fix the typo in the subtitle"
```
Notes:
- Page numbers are 0-based or 1-based depending on the tool's version/config; if the result looks off by one, retry with the other.
- Always sanity-check the output PDF before sending it out.

.skills/notion/SKILL.md Normal file

@@ -0,0 +1,156 @@
---
name: notion
description: Notion API for creating and managing pages, databases, and blocks.
homepage: https://developers.notion.com
metadata: {"clawdbot":{"emoji":"📝","requires":{"env":["NOTION_API_KEY"]},"primaryEnv":"NOTION_API_KEY"}}
---
# notion
Use the Notion API to create/read/update pages, data sources (databases), and blocks.
## Setup
1. Create an integration at https://notion.so/my-integrations
2. Copy the API key (starts with `ntn_` or `secret_`)
3. Store it:
```bash
mkdir -p ~/.config/notion
echo "ntn_your_key_here" > ~/.config/notion/api_key
```
4. Share target pages/databases with your integration (click "..." → "Connect to" → your integration name)
## API Basics
All requests need:
```bash
NOTION_KEY=$(cat ~/.config/notion/api_key)
curl -X GET "https://api.notion.com/v1/..." \
-H "Authorization: Bearer $NOTION_KEY" \
-H "Notion-Version: 2025-09-03" \
-H "Content-Type: application/json"
```
> **Note:** The `Notion-Version` header is required. This skill uses `2025-09-03` (latest). In this version, databases are called "data sources" in the API.
## Common Operations
**Search for pages and data sources:**
```bash
curl -X POST "https://api.notion.com/v1/search" \
-H "Authorization: Bearer $NOTION_KEY" \
-H "Notion-Version: 2025-09-03" \
-H "Content-Type: application/json" \
-d '{"query": "page title"}'
```
**Get page:**
```bash
curl "https://api.notion.com/v1/pages/{page_id}" \
-H "Authorization: Bearer $NOTION_KEY" \
-H "Notion-Version: 2025-09-03"
```
**Get page content (blocks):**
```bash
curl "https://api.notion.com/v1/blocks/{page_id}/children" \
-H "Authorization: Bearer $NOTION_KEY" \
-H "Notion-Version: 2025-09-03"
```
**Create page in a data source:**
```bash
curl -X POST "https://api.notion.com/v1/pages" \
-H "Authorization: Bearer $NOTION_KEY" \
-H "Notion-Version: 2025-09-03" \
-H "Content-Type: application/json" \
-d '{
"parent": {"database_id": "xxx"},
"properties": {
"Name": {"title": [{"text": {"content": "New Item"}}]},
"Status": {"select": {"name": "Todo"}}
}
}'
```
**Query a data source (database):**
```bash
curl -X POST "https://api.notion.com/v1/data_sources/{data_source_id}/query" \
-H "Authorization: Bearer $NOTION_KEY" \
-H "Notion-Version: 2025-09-03" \
-H "Content-Type: application/json" \
-d '{
"filter": {"property": "Status", "select": {"equals": "Active"}},
"sorts": [{"property": "Date", "direction": "descending"}]
}'
```
**Create a data source (database):**
```bash
curl -X POST "https://api.notion.com/v1/data_sources" \
-H "Authorization: Bearer $NOTION_KEY" \
-H "Notion-Version: 2025-09-03" \
-H "Content-Type: application/json" \
-d '{
"parent": {"page_id": "xxx"},
"title": [{"text": {"content": "My Database"}}],
"properties": {
"Name": {"title": {}},
"Status": {"select": {"options": [{"name": "Todo"}, {"name": "Done"}]}},
"Date": {"date": {}}
}
}'
```
**Update page properties:**
```bash
curl -X PATCH "https://api.notion.com/v1/pages/{page_id}" \
-H "Authorization: Bearer $NOTION_KEY" \
-H "Notion-Version: 2025-09-03" \
-H "Content-Type: application/json" \
-d '{"properties": {"Status": {"select": {"name": "Done"}}}}'
```
**Add blocks to page:**
```bash
curl -X PATCH "https://api.notion.com/v1/blocks/{page_id}/children" \
-H "Authorization: Bearer $NOTION_KEY" \
-H "Notion-Version: 2025-09-03" \
-H "Content-Type: application/json" \
-d '{
"children": [
{"object": "block", "type": "paragraph", "paragraph": {"rich_text": [{"text": {"content": "Hello"}}]}}
]
}'
```
## Property Types
Common property formats for database items:
- **Title:** `{"title": [{"text": {"content": "..."}}]}`
- **Rich text:** `{"rich_text": [{"text": {"content": "..."}}]}`
- **Select:** `{"select": {"name": "Option"}}`
- **Multi-select:** `{"multi_select": [{"name": "A"}, {"name": "B"}]}`
- **Date:** `{"date": {"start": "2024-01-15", "end": "2024-01-16"}}`
- **Checkbox:** `{"checkbox": true}`
- **Number:** `{"number": 42}`
- **URL:** `{"url": "https://..."}`
- **Email:** `{"email": "a@b.com"}`
- **Relation:** `{"relation": [{"id": "page_id"}]}`
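Combining several of these formats into one page-creation payload can be sketched as below. The property names (`Tags`, `Due`, `Done`, `Estimate`) are illustrative placeholders, not anything the API defines; validate the JSON locally before sending it.

```bash
# Illustrative properties payload for POST /v1/pages (property names are hypothetical)
cat > /tmp/props.json <<'JSON'
{
  "parent": {"database_id": "xxx"},
  "properties": {
    "Name": {"title": [{"text": {"content": "Ship v1"}}]},
    "Tags": {"multi_select": [{"name": "api"}, {"name": "docs"}]},
    "Due": {"date": {"start": "2024-01-15"}},
    "Done": {"checkbox": false},
    "Estimate": {"number": 3}
  }
}
JSON
# Sanity-check the JSON before curling it to the API
python3 -m json.tool /tmp/props.json
```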
## Key Differences in 2025-09-03
- **Databases → Data Sources:** Use `/data_sources/` endpoints for queries and retrieval
- **Two IDs:** Each database now has both a `database_id` and a `data_source_id`
- Use `database_id` when creating pages (`parent: {"database_id": "..."}`)
- Use `data_source_id` when querying (`POST /v1/data_sources/{id}/query`)
- **Search results:** Databases return as `"object": "data_source"` with their `data_source_id`
- **Parent in responses:** Pages show `parent.data_source_id` alongside `parent.database_id`
- **Finding the data_source_id:** Search for the database, or call `GET /v1/data_sources/{data_source_id}`
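The two-ID split above can be sketched end to end. The search response below is a trimmed stand-in, not the full payload shape, and the IDs are placeholders:

```bash
# Simulated /v1/search result: in 2025-09-03, databases surface as data_source objects
cat > /tmp/notion-search.json <<'JSON'
{"results": [{"object": "data_source", "id": "ds_123",
              "parent": {"type": "database_id", "database_id": "db_456"}}]}
JSON
# data_source_id drives queries; database_id drives page creation
ds_id=$(python3 -c "import json; print(json.load(open('/tmp/notion-search.json'))['results'][0]['id'])")
db_id=$(python3 -c "import json; print(json.load(open('/tmp/notion-search.json'))['results'][0]['parent']['database_id'])")
echo "query with:  POST /v1/data_sources/$ds_id/query"
echo "create with: parent {\"database_id\": \"$db_id\"}"
```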
## Notes
- Page/database IDs are UUIDs (with or without dashes)
- The API cannot set database view filters — that's UI-only
- Rate limit: ~3 requests/second average
- Use `is_inline: true` when creating data sources to embed them in pages

.skills/obsidian/SKILL.md Normal file

@@ -0,0 +1,55 @@
---
name: obsidian
description: Work with Obsidian vaults (plain Markdown notes) and automate via obsidian-cli.
homepage: https://help.obsidian.md
metadata: {"clawdbot":{"emoji":"💎","requires":{"bins":["obsidian-cli"]},"install":[{"id":"brew","kind":"brew","formula":"yakitrak/yakitrak/obsidian-cli","bins":["obsidian-cli"],"label":"Install obsidian-cli (brew)"}]}}
---
# Obsidian
Obsidian vault = a normal folder on disk.
Vault structure (typical)
- Notes: `*.md` (plain text Markdown; edit with any editor)
- Config: `.obsidian/` (workspace + plugin settings; usually don't touch from scripts)
- Canvases: `*.canvas` (JSON)
- Attachments: whatever folder you chose in Obsidian settings (images/PDFs/etc.)
## Find the active vault(s)
Obsidian desktop tracks vaults here (source of truth):
- `~/Library/Application Support/obsidian/obsidian.json`
`obsidian-cli` resolves vaults from that file; vault name is typically the **folder name** (path suffix).
Fast “what vault is active / where are the notes?”
- If you've already set a default: `obsidian-cli print-default --path-only`
- Otherwise, read `~/Library/Application Support/obsidian/obsidian.json` and use the vault entry with `"open": true`.
Notes
- Multiple vaults are common (iCloud vs `~/Documents`, work/personal, etc.). Don't guess; read the config.
- Avoid writing hardcoded vault paths into scripts; prefer reading the config or using `print-default`.
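Reading the active vault out of `obsidian.json` can be sketched like this. The sample file stands in for the real one at `~/Library/Application Support/obsidian/obsidian.json`; the `"open": true` flag marking the active vault is per the registry format described above.

```bash
# Minimal stand-in for ~/Library/Application Support/obsidian/obsidian.json
cat > /tmp/obsidian-sample.json <<'JSON'
{"vaults": {"abc123": {"path": "/Users/me/Notes", "open": true},
            "def456": {"path": "/Users/me/Work"}}}
JSON
# The entry flagged "open": true is the currently active vault
python3 -c "import json; v = json.load(open('/tmp/obsidian-sample.json'))['vaults']; print(next(e['path'] for e in v.values() if e.get('open')))"
# prints /Users/me/Notes
```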
## obsidian-cli quick start
Pick a default vault (once):
- `obsidian-cli set-default "<vault-folder-name>"`
- `obsidian-cli print-default` / `obsidian-cli print-default --path-only`
Search
- `obsidian-cli search "query"` (note names)
- `obsidian-cli search-content "query"` (inside notes; shows snippets + lines)
Create
- `obsidian-cli create "Folder/New note" --content "..." --open`
- Requires Obsidian URI handler (`obsidian://…`) working (Obsidian installed).
- Avoid creating notes under “hidden” dot-folders (e.g. `.something/...`) via URI; Obsidian may refuse.
Move/rename (safe refactor)
- `obsidian-cli move "old/path/note" "new/path/note"`
- Updates `[[wikilinks]]` and common Markdown links across the vault (this is the main win vs `mv`).
Delete
- `obsidian-cli delete "path/note"`
Prefer direct edits when appropriate: open the `.md` file and change it; Obsidian will pick it up.


@@ -0,0 +1,71 @@
---
name: openai-image-gen
description: Batch-generate images via OpenAI Images API. Random prompt sampler + `index.html` gallery.
homepage: https://platform.openai.com/docs/api-reference/images
metadata: {"clawdbot":{"emoji":"🖼️","requires":{"bins":["python3"],"env":["OPENAI_API_KEY"]},"primaryEnv":"OPENAI_API_KEY","install":[{"id":"python-brew","kind":"brew","formula":"python","bins":["python3"],"label":"Install Python (brew)"}]}}
---
# OpenAI Image Gen
Generate a handful of “random but structured” prompts and render them via the OpenAI Images API.
## Run
```bash
python3 {baseDir}/scripts/gen.py
open ~/Projects/tmp/openai-image-gen-*/index.html # if ~/Projects/tmp exists; else ./tmp/...
```
Useful flags:
```bash
# GPT image models with various options
python3 {baseDir}/scripts/gen.py --count 16 --model gpt-image-1
python3 {baseDir}/scripts/gen.py --prompt "ultra-detailed studio photo of a lobster astronaut" --count 4
python3 {baseDir}/scripts/gen.py --size 1536x1024 --quality high --out-dir ./out/images
python3 {baseDir}/scripts/gen.py --model gpt-image-1.5 --background transparent --output-format webp
# DALL-E 3 (note: count is automatically limited to 1)
python3 {baseDir}/scripts/gen.py --model dall-e-3 --quality hd --size 1792x1024 --style vivid
python3 {baseDir}/scripts/gen.py --model dall-e-3 --style natural --prompt "serene mountain landscape"
# DALL-E 2
python3 {baseDir}/scripts/gen.py --model dall-e-2 --size 512x512 --count 4
```
## Model-Specific Parameters
Different models support different parameter values. The script automatically selects appropriate defaults based on the model.
### Size
- **GPT image models** (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`): `1024x1024`, `1536x1024` (landscape), `1024x1536` (portrait), or `auto`
- Default: `1024x1024`
- **dall-e-3**: `1024x1024`, `1792x1024`, or `1024x1792`
- Default: `1024x1024`
- **dall-e-2**: `256x256`, `512x512`, or `1024x1024`
- Default: `1024x1024`
### Quality
- **GPT image models**: `auto`, `high`, `medium`, or `low`
- Default: `high`
- **dall-e-3**: `hd` or `standard`
- Default: `standard`
- **dall-e-2**: `standard` only
- Default: `standard`
### Other Notable Differences
- **dall-e-3** only supports generating 1 image at a time (`n=1`). The script automatically limits count to 1 when using this model.
- **GPT image models** support additional parameters:
- `--background`: `transparent`, `opaque`, or `auto` (default)
- `--output-format`: `png` (default), `jpeg`, or `webp`
- Note: `stream` and `moderation` are available via API but not yet implemented in this script
- **dall-e-3** has a `--style` parameter: `vivid` (hyper-real, dramatic) or `natural` (more natural looking)
## Output
- `*.png`, `*.jpeg`, or `*.webp` images (output format depends on model + `--output-format`)
- `prompts.json` (prompt → file mapping)
- `index.html` (thumbnail gallery)


@@ -0,0 +1,240 @@
#!/usr/bin/env python3
import argparse
import base64
import datetime as dt
import json
import os
import random
import re
import sys
import urllib.error
import urllib.request
from pathlib import Path
def slugify(text: str) -> str:
text = text.lower().strip()
text = re.sub(r"[^a-z0-9]+", "-", text)
text = re.sub(r"-{2,}", "-", text).strip("-")
return text or "image"
def default_out_dir() -> Path:
now = dt.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
preferred = Path.home() / "Projects" / "tmp"
base = preferred if preferred.is_dir() else Path("./tmp")
base.mkdir(parents=True, exist_ok=True)
return base / f"openai-image-gen-{now}"
def pick_prompts(count: int) -> list[str]:
subjects = [
"a lobster astronaut",
"a brutalist lighthouse",
"a cozy reading nook",
"a cyberpunk noodle shop",
"a Vienna street at dusk",
"a minimalist product photo",
"a surreal underwater library",
]
styles = [
"ultra-detailed studio photo",
"35mm film still",
"isometric illustration",
"editorial photography",
"soft watercolor",
"architectural render",
"high-contrast monochrome",
]
lighting = [
"golden hour",
"overcast soft light",
"neon lighting",
"dramatic rim light",
"candlelight",
"foggy atmosphere",
]
prompts: list[str] = []
for _ in range(count):
prompts.append(
f"{random.choice(styles)} of {random.choice(subjects)}, {random.choice(lighting)}"
)
return prompts
def get_model_defaults(model: str) -> tuple[str, str]:
"""Return (default_size, default_quality) for the given model."""
if model == "dall-e-2":
# quality will be ignored
return ("1024x1024", "standard")
elif model == "dall-e-3":
return ("1024x1024", "standard")
else:
# GPT image or future models
return ("1024x1024", "high")
def request_images(
api_key: str,
prompt: str,
model: str,
size: str,
quality: str,
background: str = "",
output_format: str = "",
style: str = "",
) -> dict:
url = "https://api.openai.com/v1/images/generations"
args = {
"model": model,
"prompt": prompt,
"size": size,
"n": 1,
}
# Quality parameter - dall-e-2 doesn't accept this parameter
if model != "dall-e-2":
args["quality"] = quality
# Note: response_format no longer supported by OpenAI Images API
# dall-e models now return URLs by default
if model.startswith("gpt-image"):
if background:
args["background"] = background
if output_format:
args["output_format"] = output_format
if model == "dall-e-3" and style:
args["style"] = style
body = json.dumps(args).encode("utf-8")
req = urllib.request.Request(
url,
method="POST",
headers={
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
},
data=body,
)
try:
with urllib.request.urlopen(req, timeout=300) as resp:
return json.loads(resp.read().decode("utf-8"))
except urllib.error.HTTPError as e:
payload = e.read().decode("utf-8", errors="replace")
raise RuntimeError(f"OpenAI Images API failed ({e.code}): {payload}") from e
def write_gallery(out_dir: Path, items: list[dict]) -> None:
thumbs = "\n".join(
[
f"""
<figure>
<a href="{it["file"]}"><img src="{it["file"]}" loading="lazy" /></a>
<figcaption>{it["prompt"]}</figcaption>
</figure>
""".strip()
for it in items
]
)
html = f"""<!doctype html>
<meta charset="utf-8" />
<title>openai-image-gen</title>
<style>
:root {{ color-scheme: dark; }}
body {{ margin: 24px; font: 14px/1.4 ui-sans-serif, system-ui; background: #0b0f14; color: #e8edf2; }}
h1 {{ font-size: 18px; margin: 0 0 16px; }}
.grid {{ display: grid; grid-template-columns: repeat(auto-fill, minmax(240px, 1fr)); gap: 16px; }}
figure {{ margin: 0; padding: 12px; border: 1px solid #1e2a36; border-radius: 14px; background: #0f1620; }}
img {{ width: 100%; height: auto; border-radius: 10px; display: block; }}
figcaption {{ margin-top: 10px; color: #b7c2cc; }}
code {{ color: #9cd1ff; }}
</style>
<h1>openai-image-gen</h1>
<p>Output: <code>{out_dir.as_posix()}</code></p>
<div class="grid">
{thumbs}
</div>
"""
(out_dir / "index.html").write_text(html, encoding="utf-8")
def main() -> int:
ap = argparse.ArgumentParser(description="Generate images via OpenAI Images API.")
ap.add_argument("--prompt", help="Single prompt. If omitted, random prompts are generated.")
ap.add_argument("--count", type=int, default=8, help="How many images to generate.")
ap.add_argument("--model", default="gpt-image-1", help="Image model id.")
ap.add_argument("--size", default="", help="Image size (e.g. 1024x1024, 1536x1024). Defaults based on model if not specified.")
ap.add_argument("--quality", default="", help="Image quality (e.g. high, standard). Defaults based on model if not specified.")
ap.add_argument("--background", default="", help="Background transparency (GPT models only): transparent, opaque, or auto.")
ap.add_argument("--output-format", default="", help="Output format (GPT models only): png, jpeg, or webp.")
ap.add_argument("--style", default="", help="Image style (dall-e-3 only): vivid or natural.")
ap.add_argument("--out-dir", default="", help="Output directory (default: ./tmp/openai-image-gen-<ts>).")
args = ap.parse_args()
api_key = (os.environ.get("OPENAI_API_KEY") or "").strip()
if not api_key:
print("Missing OPENAI_API_KEY", file=sys.stderr)
return 2
# Apply model-specific defaults if not specified
default_size, default_quality = get_model_defaults(args.model)
size = args.size or default_size
quality = args.quality or default_quality
count = args.count
if args.model == "dall-e-3" and count > 1:
print(f"Warning: dall-e-3 only supports generating 1 image at a time. Reducing count from {count} to 1.", file=sys.stderr)
count = 1
out_dir = Path(args.out_dir).expanduser() if args.out_dir else default_out_dir()
out_dir.mkdir(parents=True, exist_ok=True)
prompts = [args.prompt] * count if args.prompt else pick_prompts(count)
# Determine file extension based on output format
if args.model.startswith("gpt-image") and args.output_format:
file_ext = args.output_format
else:
file_ext = "png"
items: list[dict] = []
for idx, prompt in enumerate(prompts, start=1):
print(f"[{idx}/{len(prompts)}] {prompt}")
res = request_images(
api_key,
prompt,
args.model,
size,
quality,
args.background,
args.output_format,
args.style,
)
data = (res.get("data") or [{}])[0]  # guard against a present-but-empty data list
image_b64 = data.get("b64_json")
image_url = data.get("url")
if not image_b64 and not image_url:
raise RuntimeError(f"Unexpected response: {json.dumps(res)[:400]}")
filename = f"{idx:03d}-{slugify(prompt)[:40]}.{file_ext}"
filepath = out_dir / filename
if image_b64:
filepath.write_bytes(base64.b64decode(image_b64))
else:
try:
urllib.request.urlretrieve(image_url, filepath)
except urllib.error.URLError as e:
raise RuntimeError(f"Failed to download image from {image_url}: {e}") from e
items.append({"prompt": prompt, "file": filename})
(out_dir / "prompts.json").write_text(json.dumps(items, indent=2), encoding="utf-8")
write_gallery(out_dir, items)
print(f"\nWrote: {(out_dir / 'index.html').as_posix()}")
return 0
if __name__ == "__main__":
raise SystemExit(main())


@@ -0,0 +1,43 @@
---
name: openai-whisper-api
description: Transcribe audio via OpenAI Audio Transcriptions API (Whisper).
homepage: https://platform.openai.com/docs/guides/speech-to-text
metadata: {"clawdbot":{"emoji":"☁️","requires":{"bins":["curl"],"env":["OPENAI_API_KEY"]},"primaryEnv":"OPENAI_API_KEY"}}
---
# OpenAI Whisper API (curl)
Transcribe an audio file via OpenAI's `/v1/audio/transcriptions` endpoint.
## Quick start
```bash
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a
```
Defaults:
- Model: `whisper-1`
- Output: `<input>.txt`
## Useful flags
```bash
{baseDir}/scripts/transcribe.sh /path/to/audio.ogg --model whisper-1 --out /tmp/transcript.txt
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --language en
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --prompt "Speaker names: Peter, Daniel"
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --json --out /tmp/transcript.json
```
## API key
Set `OPENAI_API_KEY`, or configure it in `~/.clawdbot/clawdbot.json`:
```json5
{
skills: {
"openai-whisper-api": {
apiKey: "OPENAI_KEY_HERE"
}
}
}
```


@@ -0,0 +1,85 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat >&2 <<'EOF'
Usage:
transcribe.sh <audio-file> [--model whisper-1] [--out /path/to/out.txt] [--language en] [--prompt "hint"] [--json]
EOF
exit 2
}
if [[ "${1:-}" == "" || "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
fi
in="${1:-}"
shift || true
model="whisper-1"
out=""
language=""
prompt=""
response_format="text"
while [[ $# -gt 0 ]]; do
case "$1" in
--model)
model="${2:-}"
shift 2
;;
--out)
out="${2:-}"
shift 2
;;
--language)
language="${2:-}"
shift 2
;;
--prompt)
prompt="${2:-}"
shift 2
;;
--json)
response_format="json"
shift 1
;;
*)
echo "Unknown arg: $1" >&2
usage
;;
esac
done
if [[ ! -f "$in" ]]; then
echo "File not found: $in" >&2
exit 1
fi
if [[ "${OPENAI_API_KEY:-}" == "" ]]; then
echo "Missing OPENAI_API_KEY" >&2
exit 1
fi
if [[ "$out" == "" ]]; then
base="${in%.*}"
if [[ "$response_format" == "json" ]]; then
out="${base}.json"
else
out="${base}.txt"
fi
fi
mkdir -p "$(dirname "$out")"
curl -sS https://api.openai.com/v1/audio/transcriptions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Accept: application/json" \
-F "file=@${in}" \
-F "model=${model}" \
-F "response_format=${response_format}" \
${language:+-F "language=${language}"} \
${prompt:+-F "prompt=${prompt}"} \
>"$out"
echo "$out"


@@ -0,0 +1,19 @@
---
name: openai-whisper
description: Local speech-to-text with the Whisper CLI (no API key).
homepage: https://openai.com/research/whisper
metadata: {"clawdbot":{"emoji":"🎙️","requires":{"bins":["whisper"]},"install":[{"id":"brew","kind":"brew","formula":"openai-whisper","bins":["whisper"],"label":"Install OpenAI Whisper (brew)"}]}}
---
# Whisper (CLI)
Use `whisper` to transcribe audio locally.
Quick start
- `whisper /path/audio.mp3 --model medium --output_format txt --output_dir .`
- `whisper /path/audio.m4a --task translate --output_format srt`
Notes
- Models download to `~/.cache/whisper` on first run.
- `--model` defaults to `turbo` on this install.
- Use smaller models for speed, larger for accuracy.

.skills/openhue/SKILL.md Normal file

@@ -0,0 +1,30 @@
---
name: openhue
description: Control Philips Hue lights/scenes via the OpenHue CLI.
homepage: https://www.openhue.io/cli
metadata: {"clawdbot":{"emoji":"💡","requires":{"bins":["openhue"]},"install":[{"id":"brew","kind":"brew","formula":"openhue/cli/openhue-cli","bins":["openhue"],"label":"Install OpenHue CLI (brew)"}]}}
---
# OpenHue CLI
Use `openhue` to control Hue lights and scenes via a Hue Bridge.
Setup
- Discover bridges: `openhue discover`
- Guided setup: `openhue setup`
Read
- `openhue get light --json`
- `openhue get room --json`
- `openhue get scene --json`
Write
- Turn on: `openhue set light <id-or-name> --on`
- Turn off: `openhue set light <id-or-name> --off`
- Brightness: `openhue set light <id> --on --brightness 50`
- Color: `openhue set light <id> --on --rgb #3399FF`
- Scene: `openhue set scene <scene-id>`
Notes
- You may need to press the Hue Bridge button during setup.
- Use `--room "Room Name"` when light names are ambiguous.

.skills/oracle/SKILL.md Normal file

@@ -0,0 +1,105 @@
---
name: oracle
description: Best practices for using the oracle CLI (prompt + file bundling, engines, sessions, and file attachment patterns).
homepage: https://askoracle.dev
metadata: {"clawdbot":{"emoji":"🧿","requires":{"bins":["oracle"]},"install":[{"id":"node","kind":"node","package":"@steipete/oracle","bins":["oracle"],"label":"Install oracle (node)"}]}}
---
# oracle — best use
Oracle bundles your prompt + selected files into one “one-shot” request so another model can answer with real repo context (API or browser automation). Treat output as advisory: verify against code + tests.
## Main use case (browser, GPT-5.2 Pro)
Default workflow here: `--engine browser` with GPT-5.2 Pro in ChatGPT. This is the common “long think” path: ~10 minutes to ~1 hour is normal; expect a stored session you can reattach to.
Recommended defaults:
- Engine: browser (`--engine browser`)
- Model: GPT5.2 Pro (`--model gpt-5.2-pro` or `--model "5.2 Pro"`)
## Golden path
1. Pick a tight file set (fewest files that still contain the truth).
2. Preview payload + token spend (`--dry-run` + `--files-report`).
3. Use browser mode for the usual GPT-5.2 Pro workflow; use the API only when you explicitly want it.
4. If the run detaches or times out: reattach to the stored session (don't re-run).
## Commands (preferred)
- Help:
- `oracle --help`
  - If the binary isn't installed: `npx -y @steipete/oracle --help` (avoid `pnpx` here because of sqlite bindings).
- Preview (no tokens):
- `oracle --dry-run summary -p "<task>" --file "src/**" --file "!**/*.test.*"`
- `oracle --dry-run full -p "<task>" --file "src/**"`
- Token sanity:
- `oracle --dry-run summary --files-report -p "<task>" --file "src/**"`
- Browser run (main path; long-running is normal):
- `oracle --engine browser --model gpt-5.2-pro -p "<task>" --file "src/**"`
- Manual paste fallback:
- `oracle --render --copy -p "<task>" --file "src/**"`
- Note: `--copy` is a hidden alias for `--copy-markdown`.
## Attaching files (`--file`)
`--file` accepts files, directories, and globs. You can pass it multiple times; entries can be comma-separated.
- Include:
- `--file "src/**"`
- `--file src/index.ts`
- `--file docs --file README.md`
- Exclude:
- `--file "src/**" --file "!src/**/*.test.ts" --file "!**/*.snap"`
- Defaults (implementation behavior):
- Default-ignored dirs: `node_modules`, `dist`, `coverage`, `.git`, `.turbo`, `.next`, `build`, `tmp` (skipped unless explicitly passed as literal dirs/files).
- Honors `.gitignore` when expanding globs.
- Does not follow symlinks.
- Dotfiles filtered unless opted in via pattern (e.g. `--file ".github/**"`).
- Files > 1 MB rejected.
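The default filters above can be approximated in a few lines of Python. This is an illustrative re-implementation for intuition only, not oracle's actual code, and the exact byte cutoff for "1 MB" is an assumption:

```python
from pathlib import Path

# Defaults described above (illustrative; not imported from oracle)
DEFAULT_IGNORED_DIRS = {"node_modules", "dist", "coverage", ".git",
                        ".turbo", ".next", "build", "tmp"}
MAX_BYTES = 1_000_000  # "Files > 1 MB rejected"; exact cutoff is an assumption

def should_bundle(path: str, size: int) -> bool:
    """Return True if a file would survive the default filters."""
    parts = Path(path).parts
    if set(parts) & DEFAULT_IGNORED_DIRS:
        return False  # lives under a default-ignored directory
    if any(p.startswith(".") for p in parts):
        return False  # dotfiles filtered unless opted in via an explicit pattern
    return size <= MAX_BYTES
```

In the real CLI, passing an explicit pattern like `--file ".github/**"` is what lifts the dotfile filter.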
## Engines (API vs browser)
- Auto-pick: `api` when `OPENAI_API_KEY` is set; otherwise `browser`.
- Browser supports GPT + Gemini only; use `--engine api` for Claude/Grok/Codex or multi-model runs.
- Browser attachments:
- `--browser-attachments auto|never|always` (auto pastes inline up to ~60k chars then uploads).
- Remote browser host:
- Host: `oracle serve --host 0.0.0.0 --port 9473 --token <secret>`
- Client: `oracle --engine browser --remote-host <host:port> --remote-token <secret> -p "<task>" --file "src/**"`
## Sessions + slugs
- Stored under `~/.oracle/sessions` (override with `ORACLE_HOME_DIR`).
- Runs may detach or take a long time (browser + GPT-5.2 Pro often does). If the CLI times out: don't re-run; reattach.
- List: `oracle status --hours 72`
- Attach: `oracle session <id> --render`
- Use `--slug "<3-5 words>"` to keep session IDs readable.
- Duplicate prompt guard exists; use `--force` only when you truly want a fresh run.
## Prompt template (high signal)
Oracle starts with **zero** project knowledge. Assume the model cannot infer your stack, build tooling, conventions, or “obvious” paths. Include:
- Project briefing (stack + build/test commands + platform constraints).
- “Where things live” (key directories, entrypoints, config files, boundaries).
- Exact question + what you tried + the error text (verbatim).
- Constraints (“don't change X”, “must keep public API”, etc.).
- Desired output (“return patch plan + tests”, “give 3 options with tradeoffs”).
## Safety
- Don't attach secrets by default (`.env`, key files, auth tokens). Redact aggressively; share only what's required.
## “Exhaustive prompt” restoration pattern
For long investigations, write a standalone prompt + file set so you can rerun days later:
- 6–30 sentence project briefing + the goal.
- Repro steps + exact errors + what you tried.
- Attach all context files needed (entrypoints, configs, key modules, docs).
Oracle runs are one-shot; the model doesn't remember prior runs. “Restoring context” means re-running with the same prompt + `--file …` set (or reattaching a still-running stored session).
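A minimal sketch of that pattern: keep the prompt in a standalone file so the exact same run can be reproduced days later. The path and section wording here are illustrative, not oracle conventions:

```bash
# Write a standalone, re-runnable prompt (contents are an example briefing)
cat > /tmp/oracle-prompt.md <<'EOF'
## Briefing
Node 20 + TypeScript monorepo. Build: pnpm build. Test: pnpm test.
## Goal
Intermittent ETIMEDOUT in src/net/retry.ts under load.
## Tried
Raised retry count to 5; same failure. Verbatim error pasted below.
## Constraints
Keep the public API of src/net/index.ts unchanged.
## Desired output
Patch plan + tests, with tradeoffs for any alternatives.
EOF
echo "wrote /tmp/oracle-prompt.md"
```

Re-run later with the same file set: `oracle -p "$(cat /tmp/oracle-prompt.md)" --file "src/**"`.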

.skills/ordercli/SKILL.md Normal file

@@ -0,0 +1,47 @@
---
name: ordercli
description: Foodora-only CLI for checking past orders and active order status (Deliveroo WIP).
homepage: https://ordercli.sh
metadata: {"clawdbot":{"emoji":"🛵","requires":{"bins":["ordercli"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/ordercli","bins":["ordercli"],"label":"Install ordercli (brew)"},{"id":"go","kind":"go","module":"github.com/steipete/ordercli/cmd/ordercli@latest","bins":["ordercli"],"label":"Install ordercli (go)"}]}}
---
# ordercli
Use `ordercli` to check past orders and track active order status (Foodora only right now).
Quick start (Foodora)
- `ordercli foodora countries`
- `ordercli foodora config set --country AT`
- `ordercli foodora login --email you@example.com --password-stdin`
- `ordercli foodora orders`
- `ordercli foodora history --limit 20`
- `ordercli foodora history show <orderCode>`
Orders
- Active list (arrival/status): `ordercli foodora orders`
- Watch: `ordercli foodora orders --watch`
- Active order detail: `ordercli foodora order <orderCode>`
- History detail JSON: `ordercli foodora history show <orderCode> --json`
Reorder (adds to cart)
- Preview: `ordercli foodora reorder <orderCode>`
- Confirm: `ordercli foodora reorder <orderCode> --confirm`
- Address: `ordercli foodora reorder <orderCode> --confirm --address-id <id>`
Cloudflare / bot protection
- Browser login: `ordercli foodora login --email you@example.com --password-stdin --browser`
- Reuse profile: `--browser-profile "$HOME/Library/Application Support/ordercli/browser-profile"`
- Import Chrome cookies: `ordercli foodora cookies chrome --profile "Default"`
Session import (no password)
- `ordercli foodora session chrome --url https://www.foodora.at/ --profile "Default"`
- `ordercli foodora session refresh --client-id android`
Deliveroo (WIP, not working yet)
- Requires `DELIVEROO_BEARER_TOKEN` (optional `DELIVEROO_COOKIE`).
- `ordercli deliveroo config set --market uk`
- `ordercli deliveroo history`
Notes
- Use `--config /tmp/ordercli.json` for testing.
- Confirm before any reorder or cart-changing action.

.skills/peekaboo/SKILL.md Normal file

@@ -0,0 +1,153 @@
---
name: peekaboo
description: Capture and automate macOS UI with the Peekaboo CLI.
homepage: https://peekaboo.boo
metadata: {"clawdbot":{"emoji":"👀","os":["darwin"],"requires":{"bins":["peekaboo"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/peekaboo","bins":["peekaboo"],"label":"Install Peekaboo (brew)"}]}}
---
# Peekaboo
Peekaboo is a full macOS UI automation CLI: capture/inspect screens, target UI
elements, drive input, and manage apps/windows/menus. Commands share a snapshot
cache and support `--json`/`-j` for scripting. Run `peekaboo` or
`peekaboo <cmd> --help` for flags; `peekaboo --version` prints build metadata.
Tip: run via `polter peekaboo` to ensure fresh builds.
## Features (all CLI capabilities, excluding agent/MCP)
Core
- `bridge`: inspect Peekaboo Bridge host connectivity
- `capture`: live capture or video ingest + frame extraction
- `clean`: prune snapshot cache and temp files
- `config`: init/show/edit/validate, providers, models, credentials
- `image`: capture screenshots (screen/window/menu bar regions)
- `learn`: print the full agent guide + tool catalog
- `list`: apps, windows, screens, menubar, permissions
- `permissions`: check Screen Recording/Accessibility status
- `run`: execute `.peekaboo.json` scripts
- `sleep`: pause execution for a duration
- `tools`: list available tools with filtering/display options
Interaction
- `click`: target by ID/query/coords with smart waits
- `drag`: drag & drop across elements/coords/Dock
- `hotkey`: modifier combos like `cmd,shift,t`
- `move`: cursor positioning with optional smoothing
- `paste`: set clipboard -> paste -> restore
- `press`: special-key sequences with repeats
- `scroll`: directional scrolling (targeted + smooth)
- `swipe`: gesture-style drags between targets
- `type`: text + control keys (`--clear`, delays)
System
- `app`: launch/quit/relaunch/hide/unhide/switch/list apps
- `clipboard`: read/write clipboard (text/images/files)
- `dialog`: click/input/file/dismiss/list system dialogs
- `dock`: launch/right-click/hide/show/list Dock items
- `menu`: click/list application menus + menu extras
- `menubar`: list/click status bar items
- `open`: enhanced `open` with app targeting + JSON payloads
- `space`: list/switch/move-window (Spaces)
- `visualizer`: exercise Peekaboo visual feedback animations
- `window`: close/minimize/maximize/move/resize/focus/list
Vision
- `see`: annotated UI maps, snapshot IDs, optional analysis
Global runtime flags
- `--json`/`-j`, `--verbose`/`-v`, `--log-level <level>`
- `--no-remote`, `--bridge-socket <path>`
## Quickstart (happy path)
```bash
peekaboo permissions
peekaboo list apps --json
peekaboo see --annotate --path /tmp/peekaboo-see.png
peekaboo click --on B1
peekaboo type "Hello" --return
```
## Common targeting parameters (most interaction commands)
- App/window: `--app`, `--pid`, `--window-title`, `--window-id`, `--window-index`
- Snapshot targeting: `--snapshot` (ID from `see`; defaults to latest)
- Element/coords: `--on`/`--id` (element ID), `--coords x,y`
- Focus control: `--no-auto-focus`, `--space-switch`, `--bring-to-current-space`,
`--focus-timeout-seconds`, `--focus-retry-count`
## Common capture parameters
- Output: `--path`, `--format png|jpg`, `--retina`
- Targeting: `--mode screen|window|frontmost`, `--screen-index`,
`--window-title`, `--window-id`
- Analysis: `--analyze "prompt"`, `--annotate`
- Capture engine: `--capture-engine auto|classic|cg|modern|sckit`
## Common motion/typing parameters
- Timing: `--duration` (drag/swipe), `--steps`, `--delay` (type/scroll/press)
- Human-ish movement: `--profile human|linear`, `--wpm` (typing)
- Scroll: `--direction up|down|left|right`, `--amount <ticks>`, `--smooth`
## Examples
### See -> click -> type (most reliable flow)
```bash
peekaboo see --app Safari --window-title "Login" --annotate --path /tmp/see.png
peekaboo click --on B3 --app Safari
peekaboo type "user@example.com" --app Safari
peekaboo press tab --count 1 --app Safari
peekaboo type "supersecret" --app Safari --return
```
### Target by window id
```bash
peekaboo list windows --app "Visual Studio Code" --json
peekaboo click --window-id 12345 --coords 120,160
peekaboo type "Hello from Peekaboo" --window-id 12345
```
### Capture screenshots + analyze
```bash
peekaboo image --mode screen --screen-index 0 --retina --path /tmp/screen.png
peekaboo image --app Safari --window-title "Dashboard" --analyze "Summarize KPIs"
peekaboo see --mode screen --screen-index 0 --analyze "Summarize the dashboard"
```
### Live capture (motion-aware)
```bash
peekaboo capture live --mode region --region 100,100,800,600 --duration 30 \
--active-fps 8 --idle-fps 2 --highlight-changes --path /tmp/capture
```
### App + window management
```bash
peekaboo app launch "Safari" --open https://example.com
peekaboo window focus --app Safari --window-title "Example"
peekaboo window set-bounds --app Safari --x 50 --y 50 --width 1200 --height 800
peekaboo app quit --app Safari
```
### Menus, menubar, dock
```bash
peekaboo menu click --app Safari --item "New Window"
peekaboo menu click --app TextEdit --path "Format > Font > Show Fonts"
peekaboo menu click-extra --title "WiFi"
peekaboo dock launch Safari
peekaboo menubar list --json
```
### Mouse + gesture input
```bash
peekaboo move 500,300 --smooth
peekaboo drag --from B1 --to T2
peekaboo swipe --from-coords 100,500 --to-coords 100,200 --duration 800
peekaboo scroll --direction down --amount 6 --smooth
```
### Keyboard input
```bash
peekaboo hotkey --keys "cmd,shift,t"
peekaboo press escape
peekaboo type "Line 1\nLine 2" --delay 10
```
Notes
- Requires Screen Recording + Accessibility permissions.
- Use `peekaboo see --annotate` to identify targets before clicking.

.skills/sag/SKILL.md Normal file

@@ -0,0 +1,62 @@
---
name: sag
description: ElevenLabs text-to-speech with mac-style say UX.
homepage: https://sag.sh
metadata: {"clawdbot":{"emoji":"🗣️","requires":{"bins":["sag"],"env":["ELEVENLABS_API_KEY"]},"primaryEnv":"ELEVENLABS_API_KEY","install":[{"id":"brew","kind":"brew","formula":"steipete/tap/sag","bins":["sag"],"label":"Install sag (brew)"}]}}
---
# sag
Use `sag` for ElevenLabs TTS with local playback.
API key (required)
- `ELEVENLABS_API_KEY` (preferred)
- `SAG_API_KEY` also supported by the CLI
Quick start
- `sag "Hello there"`
- `sag speak -v "Roger" "Hello"`
- `sag voices`
- `sag prompting` (model-specific tips)
Model notes
- Default: `eleven_v3` (expressive)
- Stable: `eleven_multilingual_v2`
- Fast: `eleven_flash_v2_5`
Pronunciation + delivery rules
- First fix: respell (e.g. "key-note"), add hyphens, adjust casing.
- Numbers/units/URLs: `--normalize auto` (or `off` if it harms names).
- Language bias: `--lang en|de|fr|...` to guide normalization.
- v3: SSML `<break>` not supported; use `[pause]`, `[short pause]`, `[long pause]`.
- v2/v2.5: SSML `<break time="1.5s" />` supported; `<phoneme>` not exposed in `sag`.
v3 audio tags (place at the start of a line)
- `[whispers]`, `[shouts]`, `[sings]`
- `[laughs]`, `[starts laughing]`, `[sighs]`, `[exhales]`
- `[sarcastic]`, `[curious]`, `[excited]`, `[crying]`, `[mischievously]`
- Example: `sag "[whispers] keep this quiet. [short pause] ok?"`
Voice defaults
- `ELEVENLABS_VOICE_ID` or `SAG_VOICE_ID`
Confirm voice + speaker before long output.
## Chat voice responses
When Peter asks for a "voice" reply (e.g., "crazy scientist voice", "explain in voice"), generate audio and send it:
```bash
# Generate audio file
sag -v Clawd -o /tmp/voice-reply.mp3 "Your message here"
# Then include in reply:
# MEDIA:/tmp/voice-reply.mp3
```
Voice character tips:
- Crazy scientist: Use `[excited]` tags, dramatic pauses `[short pause]`, vary intensity
- Calm: Use `[whispers]` or slower pacing
- Dramatic: Use `[sings]` or `[shouts]` sparingly
Default voice for Clawd: `lj2rcrvANS3gaWWnczSX` (or just `-v Clawd`)


@@ -0,0 +1,49 @@
---
name: sherpa-onnx-tts
description: Local text-to-speech via sherpa-onnx (offline, no cloud)
metadata: {"clawdbot":{"emoji":"🗣️","os":["darwin","linux","win32"],"requires":{"env":["SHERPA_ONNX_RUNTIME_DIR","SHERPA_ONNX_MODEL_DIR"]},"install":[{"id":"download-runtime-macos","kind":"download","os":["darwin"],"url":"https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-osx-universal2-shared.tar.bz2","archive":"tar.bz2","extract":true,"stripComponents":1,"targetDir":"~/.clawdbot/tools/sherpa-onnx-tts/runtime","label":"Download sherpa-onnx runtime (macOS)"},{"id":"download-runtime-linux-x64","kind":"download","os":["linux"],"url":"https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-linux-x64-shared.tar.bz2","archive":"tar.bz2","extract":true,"stripComponents":1,"targetDir":"~/.clawdbot/tools/sherpa-onnx-tts/runtime","label":"Download sherpa-onnx runtime (Linux x64)"},{"id":"download-runtime-win-x64","kind":"download","os":["win32"],"url":"https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-win-x64-shared.tar.bz2","archive":"tar.bz2","extract":true,"stripComponents":1,"targetDir":"~/.clawdbot/tools/sherpa-onnx-tts/runtime","label":"Download sherpa-onnx runtime (Windows x64)"},{"id":"download-model-lessac","kind":"download","url":"https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-lessac-high.tar.bz2","archive":"tar.bz2","extract":true,"targetDir":"~/.clawdbot/tools/sherpa-onnx-tts/models","label":"Download Piper en_US lessac (high)"}]}}
---
# sherpa-onnx-tts
Local TTS using the sherpa-onnx offline CLI.
## Install
1) Download the runtime for your OS (extracts into `~/.clawdbot/tools/sherpa-onnx-tts/runtime`)
2) Download a voice model (extracts into `~/.clawdbot/tools/sherpa-onnx-tts/models`)
Update `~/.clawdbot/clawdbot.json`:
```json5
{
skills: {
entries: {
"sherpa-onnx-tts": {
env: {
SHERPA_ONNX_RUNTIME_DIR: "~/.clawdbot/tools/sherpa-onnx-tts/runtime",
SHERPA_ONNX_MODEL_DIR: "~/.clawdbot/tools/sherpa-onnx-tts/models/vits-piper-en_US-lessac-high"
}
}
}
}
}
```
The wrapper lives in this skill folder. Run it directly, or add the wrapper to PATH:
```bash
export PATH="{baseDir}/bin:$PATH"
```
## Usage
```bash
{baseDir}/bin/sherpa-onnx-tts -o ./tts.wav "Hello from local TTS."
```
Notes:
- Pick a different model from the sherpa-onnx `tts-models` release if you want another voice.
- If the model dir has multiple `.onnx` files, set `SHERPA_ONNX_MODEL_FILE` or pass `--model-file`.
- You can also pass `--tokens-file` or `--data-dir` to override the defaults.
- Windows: run `node {baseDir}\\bin\\sherpa-onnx-tts -o tts.wav "Hello from local TTS."`
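The wrapper's model auto-selection comes down to one rule: exactly one `.onnx` file in the model dir is used automatically; anything else requires an explicit override. A shell sketch of the same logic, for clarity:

```bash
# Mirrors the wrapper's behavior: one .onnx -> use it; otherwise ask for
# SHERPA_ONNX_MODEL_FILE or --model-file.
pick_model() {
  local dir="$1"
  local files=("$dir"/*.onnx)
  if [[ ${#files[@]} -eq 1 && -f "${files[0]}" ]]; then
    printf '%s\n' "${files[0]}"
  else
    echo "set SHERPA_ONNX_MODEL_FILE or pass --model-file" >&2
    return 1
  fi
}
```

Note the `-f` check: with zero matches the glob stays unexpanded, so the count alone isn't enough.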


@@ -0,0 +1,178 @@
#!/usr/bin/env node
const fs = require("node:fs");
const path = require("node:path");
const { spawnSync } = require("node:child_process");
function usage(message) {
if (message) {
console.error(message);
}
console.error(
"\nUsage: sherpa-onnx-tts [--runtime-dir <dir>] [--model-dir <dir>] [--model-file <file>] [--tokens-file <file>] [--data-dir <dir>] [--output <file>] \"text\"",
);
console.error("\nRequired env (or flags):\n SHERPA_ONNX_RUNTIME_DIR\n SHERPA_ONNX_MODEL_DIR");
process.exit(1);
}
function resolveRuntimeDir(explicit) {
const value = explicit || process.env.SHERPA_ONNX_RUNTIME_DIR || "";
return value.trim();
}
function resolveModelDir(explicit) {
const value = explicit || process.env.SHERPA_ONNX_MODEL_DIR || "";
return value.trim();
}
function resolveModelFile(modelDir, explicitFlag) {
const explicit = (explicitFlag || process.env.SHERPA_ONNX_MODEL_FILE || "").trim();
if (explicit) return explicit;
try {
const candidates = fs
.readdirSync(modelDir)
.filter((entry) => entry.endsWith(".onnx"))
.map((entry) => path.join(modelDir, entry));
if (candidates.length === 1) return candidates[0];
} catch {
return "";
}
return "";
}
function resolveTokensFile(modelDir, explicitFlag) {
const explicit = (explicitFlag || process.env.SHERPA_ONNX_TOKENS_FILE || "").trim();
if (explicit) return explicit;
const candidate = path.join(modelDir, "tokens.txt");
return fs.existsSync(candidate) ? candidate : "";
}
function resolveDataDir(modelDir, explicitFlag) {
const explicit = (explicitFlag || process.env.SHERPA_ONNX_DATA_DIR || "").trim();
if (explicit) return explicit;
const candidate = path.join(modelDir, "espeak-ng-data");
return fs.existsSync(candidate) ? candidate : "";
}
function resolveBinary(runtimeDir) {
const binName = process.platform === "win32" ? "sherpa-onnx-offline-tts.exe" : "sherpa-onnx-offline-tts";
return path.join(runtimeDir, "bin", binName);
}
function prependEnvPath(current, next) {
if (!next) return current;
if (!current) return next;
return `${next}${path.delimiter}${current}`;
}
const args = process.argv.slice(2);
let runtimeDir = "";
let modelDir = "";
let modelFile = "";
let tokensFile = "";
let dataDir = "";
let output = "tts.wav";
const textParts = [];
for (let i = 0; i < args.length; i += 1) {
const arg = args[i];
if (arg === "--runtime-dir") {
runtimeDir = args[i + 1] || "";
i += 1;
continue;
}
if (arg === "--model-dir") {
modelDir = args[i + 1] || "";
i += 1;
continue;
}
if (arg === "--model-file") {
modelFile = args[i + 1] || "";
i += 1;
continue;
}
if (arg === "--tokens-file") {
tokensFile = args[i + 1] || "";
i += 1;
continue;
}
if (arg === "--data-dir") {
dataDir = args[i + 1] || "";
i += 1;
continue;
}
if (arg === "-o" || arg === "--output") {
output = args[i + 1] || output;
i += 1;
continue;
}
if (arg === "--text") {
textParts.push(args[i + 1] || "");
i += 1;
continue;
}
textParts.push(arg);
}
runtimeDir = resolveRuntimeDir(runtimeDir);
modelDir = resolveModelDir(modelDir);
if (!runtimeDir || !modelDir) {
usage("Missing runtime/model directory.");
}
modelFile = resolveModelFile(modelDir, modelFile);
tokensFile = resolveTokensFile(modelDir, tokensFile);
dataDir = resolveDataDir(modelDir, dataDir);
if (!modelFile || !tokensFile || !dataDir) {
usage(
"Model directory is missing required files. Set SHERPA_ONNX_MODEL_FILE, SHERPA_ONNX_TOKENS_FILE, SHERPA_ONNX_DATA_DIR or pass --model-file/--tokens-file/--data-dir.",
);
}
const text = textParts.join(" ").trim();
if (!text) {
usage("Missing text.");
}
const bin = resolveBinary(runtimeDir);
if (!fs.existsSync(bin)) {
usage(`TTS binary not found: ${bin}`);
}
const env = { ...process.env };
const libDir = path.join(runtimeDir, "lib");
if (process.platform === "darwin") {
env.DYLD_LIBRARY_PATH = prependEnvPath(env.DYLD_LIBRARY_PATH || "", libDir);
} else if (process.platform === "win32") {
env.PATH = prependEnvPath(env.PATH || "", [path.join(runtimeDir, "bin"), libDir].join(path.delimiter));
} else {
env.LD_LIBRARY_PATH = prependEnvPath(env.LD_LIBRARY_PATH || "", libDir);
}
const outputPath = path.isAbsolute(output) ? output : path.join(process.cwd(), output);
fs.mkdirSync(path.dirname(outputPath), { recursive: true });
const child = spawnSync(
bin,
[
`--vits-model=${modelFile}`,
`--vits-tokens=${tokensFile}`,
`--vits-data-dir=${dataDir}`,
`--output-filename=${outputPath}`,
text,
],
{
stdio: "inherit",
env,
},
);
if (typeof child.status === "number") {
process.exit(child.status);
}
if (child.error) {
console.error(child.error.message || String(child.error));
}
process.exit(1);

.skills/songsee/SKILL.md Normal file

@@ -0,0 +1,29 @@
---
name: songsee
description: Generate spectrograms and feature-panel visualizations from audio with the songsee CLI.
homepage: https://github.com/steipete/songsee
metadata: {"clawdbot":{"emoji":"🌊","requires":{"bins":["songsee"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/songsee","bins":["songsee"],"label":"Install songsee (brew)"}]}}
---
# songsee
Generate spectrograms + feature panels from audio.
Quick start
- Spectrogram: `songsee track.mp3`
- Multi-panel: `songsee track.mp3 --viz spectrogram,mel,chroma,hpss,selfsim,loudness,tempogram,mfcc,flux`
- Time slice: `songsee track.mp3 --start 12.5 --duration 8 -o slice.jpg`
- Stdin: `cat track.mp3 | songsee - --format png -o out.png`
Common flags
- `--viz` list (repeatable or comma-separated)
- `--style` palette (classic, magma, inferno, viridis, gray)
- `--width` / `--height` output size
- `--window` / `--hop` FFT settings
- `--min-freq` / `--max-freq` frequency range
- `--start` / `--duration` time slice
- `--format` jpg|png
Notes
- WAV/MP3 decode native; other formats use ffmpeg if available.
- Multiple `--viz` renders a grid.

.skills/sonoscli/SKILL.md Normal file

@@ -0,0 +1,26 @@
---
name: sonoscli
description: Control Sonos speakers (discover/status/play/volume/group).
homepage: https://sonoscli.sh
metadata: {"clawdbot":{"emoji":"🔊","requires":{"bins":["sonos"]},"install":[{"id":"go","kind":"go","module":"github.com/steipete/sonoscli/cmd/sonos@latest","bins":["sonos"],"label":"Install sonoscli (go)"}]}}
---
# Sonos CLI
Use `sonos` to control Sonos speakers on the local network.
Quick start
- `sonos discover`
- `sonos status --name "Kitchen"`
- `sonos play|pause|stop --name "Kitchen"`
- `sonos volume set 15 --name "Kitchen"`
Common tasks
- Grouping: `sonos group status|join|unjoin|party|solo`
- Favorites: `sonos favorites list|open`
- Queue: `sonos queue list|play|clear`
- Spotify search (via SMAPI): `sonos smapi search --service "Spotify" --category tracks "query"`
Notes
- If SSDP fails, specify `--ip <speaker-ip>`.
- Spotify Web API search is optional and requires `SPOTIFY_CLIENT_ID/SECRET`.


@@ -0,0 +1,34 @@
---
name: spotify-player
description: Terminal Spotify playback/search via spogo (preferred) or spotify_player.
homepage: https://www.spotify.com
metadata: {"clawdbot":{"emoji":"🎵","requires":{"anyBins":["spogo","spotify_player"]},"install":[{"id":"brew","kind":"brew","formula":"spogo","tap":"steipete/tap","bins":["spogo"],"label":"Install spogo (brew)"},{"id":"brew","kind":"brew","formula":"spotify_player","bins":["spotify_player"],"label":"Install spotify_player (brew)"}]}}
---
# spogo / spotify_player
Use `spogo` **(preferred)** for Spotify playback/search. Fall back to `spotify_player` if needed.
Requirements
- Spotify Premium account.
- Either `spogo` or `spotify_player` installed.
spogo setup
- Import cookies: `spogo auth import --browser chrome`
Common CLI commands
- Search: `spogo search track "query"`
- Playback: `spogo play|pause|next|prev`
- Devices: `spogo device list`, `spogo device set "<name|id>"`
- Status: `spogo status`
spotify_player commands (fallback)
- Search: `spotify_player search "query"`
- Playback: `spotify_player playback play|pause|next|previous`
- Connect device: `spotify_player connect`
- Like track: `spotify_player like`
Notes
- Config folder: `~/.config/spotify-player` (e.g., `app.toml`).
- For Spotify Connect integration, set a user `client_id` in config.
- TUI shortcuts are available via `?` in the app.


@@ -0,0 +1,67 @@
---
name: summarize
description: Summarize or extract text/transcripts from URLs, podcasts, and local files (great fallback for “transcribe this YouTube/video”).
homepage: https://summarize.sh
metadata: {"clawdbot":{"emoji":"🧾","requires":{"bins":["summarize"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/summarize","bins":["summarize"],"label":"Install summarize (brew)"}]}}
---
# Summarize
Fast CLI to summarize URLs, local files, and YouTube links.
## When to use (trigger phrases)
Use this skill immediately when the user asks any of:
- “use summarize.sh”
- “whats this link/video about?”
- “summarize this URL/article”
- “transcribe this YouTube/video” (best-effort transcript extraction; no `yt-dlp` needed)
## Quick start
```bash
summarize "https://example.com" --model google/gemini-3-flash-preview
summarize "/path/to/file.pdf" --model google/gemini-3-flash-preview
summarize "https://youtu.be/dQw4w9WgXcQ" --youtube auto
```
## YouTube: summary vs transcript
Best-effort transcript (URLs only):
```bash
summarize "https://youtu.be/dQw4w9WgXcQ" --youtube auto --extract-only
```
If the user asked for a transcript but it's huge, return a tight summary first, then ask which section/time range to expand.
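One mechanical way to apply that rule, given a transcript saved via `--extract-only` (the 8,000-character threshold is an assumption, not a summarize setting):

```bash
# Decide whether to paste the transcript or summarize first, by size.
decide_reply() {
  local transcript_file="$1" limit="${2:-8000}"
  local chars
  chars=$(wc -c < "$transcript_file")
  if [ "$chars" -gt "$limit" ]; then
    echo "summarize-first"
  else
    echo "full-transcript"
  fi
}
```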
## Model + keys
Set the API key for your chosen provider:
- OpenAI: `OPENAI_API_KEY`
- Anthropic: `ANTHROPIC_API_KEY`
- xAI: `XAI_API_KEY`
- Google: `GEMINI_API_KEY` (aliases: `GOOGLE_GENERATIVE_AI_API_KEY`, `GOOGLE_API_KEY`)
Default model is `google/gemini-3-flash-preview` if none is set.
## Useful flags
- `--length short|medium|long|xl|xxl|<chars>`
- `--max-output-tokens <count>`
- `--extract-only` (URLs only)
- `--json` (machine readable)
- `--firecrawl auto|off|always` (fallback extraction)
- `--youtube auto` (Apify fallback if `APIFY_API_TOKEN` set)
## Config
Optional config file: `~/.summarize/config.json`
```json
{ "model": "openai/gpt-5.2" }
```
Optional services:
- `FIRECRAWL_API_KEY` for blocked sites
- `APIFY_API_TOKEN` for YouTube fallback


@@ -0,0 +1,61 @@
---
name: things-mac
description: Manage Things 3 via the `things` CLI on macOS (add/update projects+todos via URL scheme; read/search/list from the local Things database). Use when a user asks Clawdbot to add a task to Things, list inbox/today/upcoming, search tasks, or inspect projects/areas/tags.
homepage: https://github.com/ossianhempel/things3-cli
metadata: {"clawdbot":{"emoji":"✅","os":["darwin"],"requires":{"bins":["things"]},"install":[{"id":"go","kind":"go","module":"github.com/ossianhempel/things3-cli/cmd/things@latest","bins":["things"],"label":"Install things3-cli (go)"}]}}
---
# Things 3 CLI
Use `things` to read your local Things database (inbox/today/search/projects/areas/tags) and to add/update todos via the Things URL scheme.
Setup
- Install (recommended, Apple Silicon): `GOBIN=/opt/homebrew/bin go install github.com/ossianhempel/things3-cli/cmd/things@latest`
- If DB reads fail: grant **Full Disk Access** to the calling app (Terminal for manual runs; `Clawdbot.app` for gateway runs).
- Optional: set `THINGSDB` (or pass `--db`) to point at your `ThingsData-*` folder.
- Optional: set `THINGS_AUTH_TOKEN` to avoid passing `--auth-token` for update ops.
Read-only (DB)
- `things inbox --limit 50`
- `things today`
- `things upcoming`
- `things search "query"`
- `things projects` / `things areas` / `things tags`
Write (URL scheme)
- Prefer safe preview: `things --dry-run add "Title"`
- Add: `things add "Title" --notes "..." --when today --deadline 2026-01-02`
- Bring Things to front: `things --foreground add "Title"`
Examples: add a todo
- Basic: `things add "Buy milk"`
- With notes: `things add "Buy milk" --notes "2% + bananas"`
- Into a project/area: `things add "Book flights" --list "Travel"`
- Into a project heading: `things add "Pack charger" --list "Travel" --heading "Before"`
- With tags: `things add "Call dentist" --tags "health,phone"`
- Checklist: `things add "Trip prep" --checklist-item "Passport" --checklist-item "Tickets"`
- From STDIN (multi-line => title + notes):

  ```bash
  cat <<'EOF' | things add -
  Title line
  Notes line 1
  Notes line 2
  EOF
  ```
Examples: modify a todo (needs auth token)
- First: get the ID (UUID column): `things search "milk" --limit 5`
- Auth: set `THINGS_AUTH_TOKEN` or pass `--auth-token <TOKEN>`
- Title: `things update --id <UUID> --auth-token <TOKEN> "New title"`
- Notes replace: `things update --id <UUID> --auth-token <TOKEN> --notes "New notes"`
- Notes append/prepend: `things update --id <UUID> --auth-token <TOKEN> --append-notes "..."` / `--prepend-notes "..."`
- Move lists: `things update --id <UUID> --auth-token <TOKEN> --list "Travel" --heading "Before"`
- Tags replace/add: `things update --id <UUID> --auth-token <TOKEN> --tags "a,b"` / `things update --id <UUID> --auth-token <TOKEN> --add-tags "a,b"`
- Complete/cancel (soft-delete-ish): `things update --id <UUID> --auth-token <TOKEN> --completed` / `--canceled`
- Safe preview: `things --dry-run update --id <UUID> --auth-token <TOKEN> --completed`
Delete a todo?
- Not supported by `things3-cli` right now (there is no delete/move-to-trash write command; `things trash` only lists trashed items).
- Options: use Things UI to delete/trash, or mark as `--completed` / `--canceled` via `things update`.
Notes
- macOS-only.
- `--dry-run` prints the URL and does not open Things.

.skills/tmux/SKILL.md

@@ -0,0 +1,121 @@
---
name: tmux
description: Remote-control tmux sessions for interactive CLIs by sending keystrokes and scraping pane output.
metadata: {"clawdbot":{"emoji":"🧵","os":["darwin","linux"],"requires":{"bins":["tmux"]}}}
---
# tmux Skill (Clawdbot)
Use tmux only when you need an interactive TTY. Prefer exec background mode for long-running, non-interactive tasks.
## Quickstart (isolated socket, exec tool)
```bash
SOCKET_DIR="${CLAWDBOT_TMUX_SOCKET_DIR:-${TMPDIR:-/tmp}/clawdbot-tmux-sockets}"
mkdir -p "$SOCKET_DIR"
SOCKET="$SOCKET_DIR/clawdbot.sock"
SESSION=clawdbot-python
tmux -S "$SOCKET" new -d -s "$SESSION" -n shell
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- 'PYTHON_BASIC_REPL=1 python3 -q' Enter
tmux -S "$SOCKET" capture-pane -p -J -t "$SESSION":0.0 -S -200
```
After starting a session, always print monitor commands:
```
To monitor:
tmux -S "$SOCKET" attach -t "$SESSION"
tmux -S "$SOCKET" capture-pane -p -J -t "$SESSION":0.0 -S -200
```
## Socket convention
- Use `CLAWDBOT_TMUX_SOCKET_DIR` (default `${TMPDIR:-/tmp}/clawdbot-tmux-sockets`).
- Default socket path: `"$CLAWDBOT_TMUX_SOCKET_DIR/clawdbot.sock"`.
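When several tasks share the convention above, it helps to suffix session names so retries never collide with a stale session. A minimal sketch (the naming scheme here is ours, not a tmux requirement):

```shell
#!/usr/bin/env bash
# Derive a unique session name: fixed prefix + task label + epoch timestamp.
new_session_name() {
  local task="$1"
  printf 'clawdbot-%s-%s' "$task" "$(date +%s)"
}

# SESSION="$(new_session_name python)"   # e.g. clawdbot-python-1769650000
```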
## Targeting panes and naming
- Target format: `session:window.pane` (defaults to `:0.0`).
- Keep names short; avoid spaces.
- Inspect: `tmux -S "$SOCKET" list-sessions`, `tmux -S "$SOCKET" list-panes -a`.
## Finding sessions
- List sessions on your socket: `{baseDir}/scripts/find-sessions.sh -S "$SOCKET"`.
- Scan all sockets: `{baseDir}/scripts/find-sessions.sh --all` (uses `CLAWDBOT_TMUX_SOCKET_DIR`).
## Sending input safely
- Prefer literal sends: `tmux -S "$SOCKET" send-keys -t target -l -- "$cmd"`.
- Control keys: `tmux -S "$SOCKET" send-keys -t target C-c`.
## Watching output
- Capture recent history: `tmux -S "$SOCKET" capture-pane -p -J -t target -S -200`.
- Wait for prompts: `{baseDir}/scripts/wait-for-text.sh -t session:0.0 -p 'pattern'`.
- Attaching is OK; detach with `Ctrl+b d`.
## Spawning processes
- For python REPLs, set `PYTHON_BASIC_REPL=1` (non-basic REPL breaks send-keys flows).
## Windows / WSL
- tmux is supported on macOS/Linux. On Windows, use WSL and install tmux inside WSL.
- This skill is gated to `darwin`/`linux` and requires `tmux` on PATH.
## Orchestrating Coding Agents (Codex, Claude Code)
tmux excels at running multiple coding agents in parallel:
```bash
SOCKET="${TMPDIR:-/tmp}/codex-army.sock"
# Create multiple sessions
for i in 1 2 3 4 5; do
tmux -S "$SOCKET" new-session -d -s "agent-$i"
done
# Launch agents in different workdirs
tmux -S "$SOCKET" send-keys -t agent-1 "cd /tmp/project1 && codex --yolo 'Fix bug X'" Enter
tmux -S "$SOCKET" send-keys -t agent-2 "cd /tmp/project2 && codex --yolo 'Fix bug Y'" Enter
# Poll for completion (heuristic: has the shell prompt returned on the last line?)
for sess in agent-1 agent-2; do
  if tmux -S "$SOCKET" capture-pane -p -t "$sess" -S -3 | grep -qE '[$❯] *$'; then
    echo "$sess: DONE"
  else
    echo "$sess: Running..."
  fi
done
# Get full output from completed session
tmux -S "$SOCKET" capture-pane -p -t agent-1 -S -500
```
**Tips:**
- Use separate git worktrees for parallel fixes (no branch conflicts)
- `pnpm install` first before running codex in fresh clones
- Check for a shell prompt (`❯` or `$`) on the last line to detect completion
- Codex needs `--yolo` or `--full-auto` for non-interactive fixes
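The prompt-detection tip above can be factored into a small helper. This is a heuristic sketch, and the prompt characters are an assumption (add your shell's glyph, e.g. `❯`, if it differs):

```shell
#!/usr/bin/env bash
# A pane is considered "done" when the last non-empty line of its captured
# output ends with a shell prompt character. Reads pane text on stdin.
pane_done() {
  local last re='[$%#] *$'
  last="$(grep -v '^[[:space:]]*$' | tail -n 1)"
  [[ "$last" =~ $re ]]
}

# Usage:
#   tmux -S "$SOCKET" capture-pane -p -t agent-1 -S -3 | pane_done && echo DONE
```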
## Cleanup
- Kill a session: `tmux -S "$SOCKET" kill-session -t "$SESSION"`.
- Kill all sessions on a socket: `tmux -S "$SOCKET" list-sessions -F '#{session_name}' | xargs -r -n1 tmux -S "$SOCKET" kill-session -t`.
- Remove everything on the private socket: `tmux -S "$SOCKET" kill-server`.
## Helper: wait-for-text.sh
`{baseDir}/scripts/wait-for-text.sh` polls a pane for a regex (or fixed string) with a timeout.
```bash
{baseDir}/scripts/wait-for-text.sh -t session:0.0 -p 'pattern' [-F] [-T 20] [-i 0.5] [-l 2000]
```
- `-t`/`--target` pane target (required)
- `-p`/`--pattern` regex to match (required); add `-F` for fixed string
- `-T` timeout seconds (integer, default 15)
- `-i` poll interval seconds (default 0.5)
- `-l` history lines to search (integer, default 1000)


@@ -0,0 +1,112 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<'USAGE'
Usage: find-sessions.sh [-L socket-name|-S socket-path|-A] [-q pattern]
List tmux sessions on a socket (default tmux socket if none provided).
Options:
-L, --socket tmux socket name (passed to tmux -L)
-S, --socket-path tmux socket path (passed to tmux -S)
-A, --all scan all sockets under CLAWDBOT_TMUX_SOCKET_DIR
-q, --query case-insensitive substring to filter session names
-h, --help show this help
USAGE
}
socket_name=""
socket_path=""
query=""
scan_all=false
socket_dir="${CLAWDBOT_TMUX_SOCKET_DIR:-${TMPDIR:-/tmp}/clawdbot-tmux-sockets}"
while [[ $# -gt 0 ]]; do
case "$1" in
-L|--socket) socket_name="${2-}"; shift 2 ;;
-S|--socket-path) socket_path="${2-}"; shift 2 ;;
-A|--all) scan_all=true; shift ;;
-q|--query) query="${2-}"; shift 2 ;;
-h|--help) usage; exit 0 ;;
*) echo "Unknown option: $1" >&2; usage; exit 1 ;;
esac
done
if [[ "$scan_all" == true && ( -n "$socket_name" || -n "$socket_path" ) ]]; then
echo "Cannot combine --all with -L or -S" >&2
exit 1
fi
if [[ -n "$socket_name" && -n "$socket_path" ]]; then
echo "Use either -L or -S, not both" >&2
exit 1
fi
if ! command -v tmux >/dev/null 2>&1; then
echo "tmux not found in PATH" >&2
exit 1
fi
list_sessions() {
local label="$1"; shift
local tmux_cmd=(tmux "$@")
if ! sessions="$("${tmux_cmd[@]}" list-sessions -F $'#{session_name}\t#{session_attached}\t#{t:session_created}' 2>/dev/null)"; then
echo "No tmux server found on $label" >&2
return 1
fi
if [[ -n "$query" ]]; then
sessions="$(printf '%s\n' "$sessions" | grep -i -- "$query" || true)"
fi
if [[ -z "$sessions" ]]; then
echo "No sessions found on $label"
return 0
fi
echo "Sessions on $label:"
printf '%s\n' "$sessions" | while IFS=$'\t' read -r name attached created; do
attached_label=$([[ "$attached" == "1" ]] && echo "attached" || echo "detached")
printf ' - %s (%s, started %s)\n' "$name" "$attached_label" "$created"
done
}
if [[ "$scan_all" == true ]]; then
if [[ ! -d "$socket_dir" ]]; then
echo "Socket directory not found: $socket_dir" >&2
exit 1
fi
shopt -s nullglob
sockets=("$socket_dir"/*)
shopt -u nullglob
if [[ "${#sockets[@]}" -eq 0 ]]; then
echo "No sockets found under $socket_dir" >&2
exit 1
fi
exit_code=0
for sock in "${sockets[@]}"; do
if [[ ! -S "$sock" ]]; then
continue
fi
list_sessions "socket path '$sock'" -S "$sock" || exit_code=$?
done
exit "$exit_code"
fi
tmux_cmd=(tmux)
socket_label="default socket"
if [[ -n "$socket_name" ]]; then
tmux_cmd+=(-L "$socket_name")
socket_label="socket name '$socket_name'"
elif [[ -n "$socket_path" ]]; then
tmux_cmd+=(-S "$socket_path")
socket_label="socket path '$socket_path'"
fi
list_sessions "$socket_label" "${tmux_cmd[@]:1}"


@@ -0,0 +1,83 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<'USAGE'
Usage: wait-for-text.sh -t target -p pattern [options]
Poll a tmux pane for text and exit when found.
Options:
-t, --target tmux target (session:window.pane), required
-p, --pattern regex pattern to look for, required
-F, --fixed treat pattern as a fixed string (grep -F)
-T, --timeout seconds to wait (integer, default: 15)
-i, --interval poll interval in seconds (default: 0.5)
-l, --lines number of history lines to inspect (integer, default: 1000)
-h, --help show this help
USAGE
}
target=""
pattern=""
grep_flag="-E"
timeout=15
interval=0.5
lines=1000
while [[ $# -gt 0 ]]; do
case "$1" in
-t|--target) target="${2-}"; shift 2 ;;
-p|--pattern) pattern="${2-}"; shift 2 ;;
-F|--fixed) grep_flag="-F"; shift ;;
-T|--timeout) timeout="${2-}"; shift 2 ;;
-i|--interval) interval="${2-}"; shift 2 ;;
-l|--lines) lines="${2-}"; shift 2 ;;
-h|--help) usage; exit 0 ;;
*) echo "Unknown option: $1" >&2; usage; exit 1 ;;
esac
done
if [[ -z "$target" || -z "$pattern" ]]; then
echo "target and pattern are required" >&2
usage
exit 1
fi
if ! [[ "$timeout" =~ ^[0-9]+$ ]]; then
echo "timeout must be an integer number of seconds" >&2
exit 1
fi
if ! [[ "$lines" =~ ^[0-9]+$ ]]; then
echo "lines must be an integer" >&2
exit 1
fi
if ! command -v tmux >/dev/null 2>&1; then
echo "tmux not found in PATH" >&2
exit 1
fi
# End time in epoch seconds (integer, good enough for polling)
start_epoch=$(date +%s)
deadline=$((start_epoch + timeout))
while true; do
# -J joins wrapped lines, -S uses negative index to read last N lines
pane_text="$(tmux capture-pane -p -J -t "$target" -S "-${lines}" 2>/dev/null || true)"
if printf '%s\n' "$pane_text" | grep $grep_flag -- "$pattern" >/dev/null 2>&1; then
exit 0
fi
now=$(date +%s)
if (( now >= deadline )); then
echo "Timed out after ${timeout}s waiting for pattern: $pattern" >&2
echo "Last ${lines} lines from $target:" >&2
printf '%s\n' "$pane_text" >&2
exit 1
fi
sleep "$interval"
done

.skills/trello/SKILL.md

@@ -0,0 +1,84 @@
---
name: trello
description: Manage Trello boards, lists, and cards via the Trello REST API.
homepage: https://developer.atlassian.com/cloud/trello/rest/
metadata: {"clawdbot":{"emoji":"📋","requires":{"bins":["jq"],"env":["TRELLO_API_KEY","TRELLO_TOKEN"]}}}
---
# Trello Skill
Manage Trello boards, lists, and cards directly from Clawdbot.
## Setup
1. Get your API key: https://trello.com/app-key
2. Generate a token (click "Token" link on that page)
3. Set environment variables:
```bash
export TRELLO_API_KEY="your-api-key"
export TRELLO_TOKEN="your-token"
```
## Usage
All commands use curl to hit the Trello REST API.
### List boards
```bash
curl -s "https://api.trello.com/1/members/me/boards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, id}'
```
### List lists in a board
```bash
curl -s "https://api.trello.com/1/boards/{boardId}/lists?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, id}'
```
### List cards in a list
```bash
curl -s "https://api.trello.com/1/lists/{listId}/cards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, id, desc}'
```
### Create a card
```bash
curl -s -X POST "https://api.trello.com/1/cards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
-d "idList={listId}" \
-d "name=Card Title" \
-d "desc=Card description"
```
### Move a card to another list
```bash
curl -s -X PUT "https://api.trello.com/1/cards/{cardId}?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
-d "idList={newListId}"
```
### Add a comment to a card
```bash
curl -s -X POST "https://api.trello.com/1/cards/{cardId}/actions/comments?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
-d "text=Your comment here"
```
### Archive a card
```bash
curl -s -X PUT "https://api.trello.com/1/cards/{cardId}?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
-d "closed=true"
```
## Notes
- Board/List/Card IDs can be found in the Trello URL or via the list commands
- The API key and token provide full access to your Trello account - keep them secret!
- Rate limits: 300 requests per 10 seconds per API key; 100 requests per 10 seconds per token; `/1/members` endpoints are limited to 100 requests per 900 seconds
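Because every call repeats the same `key`/`token` query parameters, a tiny URL builder keeps the examples readable. A sketch (the `trello_url` name is ours, not part of the Trello API):

```shell
#!/usr/bin/env bash
# Build an authenticated Trello API URL from a path and optional extra params,
# reading credentials from TRELLO_API_KEY / TRELLO_TOKEN.
trello_url() {
  local path="$1" params="${2:-}"
  local url="https://api.trello.com/1${path}?key=${TRELLO_API_KEY}&token=${TRELLO_TOKEN}"
  [[ -n "$params" ]] && url="${url}&${params}"
  printf '%s' "$url"
}

# curl -s "$(trello_url /members/me/boards 'fields=name,id')" | jq
```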
## Examples
```bash
# Get all boards
curl -s "https://api.trello.com/1/members/me/boards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN&fields=name,id" | jq
# Find a specific board by name
curl -s "https://api.trello.com/1/members/me/boards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | select(.name | contains("Work"))'
# Get all cards on a board
curl -s "https://api.trello.com/1/boards/{boardId}/cards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, list: .idList}'
```


@@ -0,0 +1,29 @@
---
name: video-frames
description: Extract frames or short clips from videos using ffmpeg.
homepage: https://ffmpeg.org
metadata: {"clawdbot":{"emoji":"🎞️","requires":{"bins":["ffmpeg"]},"install":[{"id":"brew","kind":"brew","formula":"ffmpeg","bins":["ffmpeg"],"label":"Install ffmpeg (brew)"}]}}
---
# Video Frames (ffmpeg)
Extract a single frame from a video, or create quick thumbnails for inspection.
## Quick start
First frame:
```bash
{baseDir}/scripts/frame.sh /path/to/video.mp4 --out /tmp/frame.jpg
```
At a timestamp:
```bash
{baseDir}/scripts/frame.sh /path/to/video.mp4 --time 00:00:10 --out /tmp/frame-10s.jpg
```
## Notes
- Prefer `--time` when you want to see what is happening around a specific moment.
- Use `.jpg` for quick sharing; use `.png` for crisp UI frames.


@@ -0,0 +1,81 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat >&2 <<'EOF'
Usage:
frame.sh <video-file> [--time HH:MM:SS] [--index N] --out /path/to/frame.jpg
Examples:
frame.sh video.mp4 --out /tmp/frame.jpg
frame.sh video.mp4 --time 00:00:10 --out /tmp/frame-10s.jpg
frame.sh video.mp4 --index 0 --out /tmp/frame0.png
EOF
exit 2
}
if [[ "${1:-}" == "" || "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
fi
in="${1:-}"
shift || true
time=""
index=""
out=""
while [[ $# -gt 0 ]]; do
case "$1" in
--time)
time="${2:-}"
shift 2
;;
--index)
index="${2:-}"
shift 2
;;
--out)
out="${2:-}"
shift 2
;;
*)
echo "Unknown arg: $1" >&2
usage
;;
esac
done
if [[ ! -f "$in" ]]; then
echo "File not found: $in" >&2
exit 1
fi
if [[ "$out" == "" ]]; then
echo "Missing --out" >&2
usage
fi
mkdir -p "$(dirname "$out")"
if [[ "$index" != "" ]]; then
ffmpeg -hide_banner -loglevel error -y \
-i "$in" \
-vf "select=eq(n\\,${index})" \
-vframes 1 \
"$out"
elif [[ "$time" != "" ]]; then
ffmpeg -hide_banner -loglevel error -y \
-ss "$time" \
-i "$in" \
-frames:v 1 \
"$out"
else
ffmpeg -hide_banner -loglevel error -y \
-i "$in" \
-vf "select=eq(n\\,0)" \
-vframes 1 \
"$out"
fi
echo "$out"

.skills/wacli/SKILL.md

@@ -0,0 +1,42 @@
---
name: wacli
description: Send WhatsApp messages to other people or search/sync WhatsApp history via the wacli CLI (not for normal user chats).
homepage: https://wacli.sh
metadata: {"clawdbot":{"emoji":"📱","requires":{"bins":["wacli"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/wacli","bins":["wacli"],"label":"Install wacli (brew)"},{"id":"go","kind":"go","module":"github.com/steipete/wacli/cmd/wacli@latest","bins":["wacli"],"label":"Install wacli (go)"}]}}
---
# wacli
Use `wacli` only when the user explicitly asks you to message someone else on WhatsApp or when they ask to sync/search WhatsApp history.
Do NOT use `wacli` for normal user chats; Clawdbot routes WhatsApp conversations automatically.
If the user is chatting with you on WhatsApp, you should not reach for this tool unless they ask you to contact a third party.
Safety
- Require explicit recipient + message text.
- Confirm recipient + message before sending.
- If anything is ambiguous, ask a clarifying question.
Auth + sync
- `wacli auth` (QR login + initial sync)
- `wacli sync --follow` (continuous sync)
- `wacli doctor`
Find chats + messages
- `wacli chats list --limit 20 --query "name or number"`
- `wacli messages search "query" --limit 20 --chat <jid>`
- `wacli messages search "invoice" --after 2025-01-01 --before 2025-12-31`
History backfill
- `wacli history backfill --chat <jid> --requests 2 --count 50`
Send
- Text: `wacli send text --to "+14155551212" --message "Hello! Are you free at 3pm?"`
- Group: `wacli send text --to "1234567890-123456789@g.us" --message "Running 5 min late."`
- File: `wacli send file --to "+14155551212" --file /path/agenda.pdf --caption "Agenda"`
Notes
- Store dir: `~/.wacli` (override with `--store`).
- Use `--json` for machine-readable output when parsing.
- Backfill requires your phone online; results are best-effort.
- WhatsApp CLI is not needed for routine user chats; it's for messaging other people.
- JIDs: direct chats look like `<number>@s.whatsapp.net`; groups look like `<id>@g.us` (use `wacli chats list` to find).
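The direct-chat JID format in the note above is mechanical enough to build from a phone number. A convenience sketch (the helper name is ours; for groups, still look up the id via `wacli chats list`):

```shell
#!/usr/bin/env bash
# Normalize a phone number (with +, spaces, dashes, parentheses) into a
# direct-chat JID of the form <digits>@s.whatsapp.net.
to_jid() {
  local digits="${1//[^0-9]/}"   # strip everything except digits
  printf '%s@s.whatsapp.net' "$digits"
}

# to_jid "+1 (415) 555-1212"   => 14155551212@s.whatsapp.net
```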

.skills/weather/SKILL.md

@@ -0,0 +1,49 @@
---
name: weather
description: Get current weather and forecasts (no API key required).
homepage: https://wttr.in/:help
metadata: {"clawdbot":{"emoji":"🌤️","requires":{"bins":["curl"]}}}
---
# Weather
Two free services, no API keys needed.
## wttr.in (primary)
Quick one-liner:
```bash
curl -s "wttr.in/London?format=3"
# Output: London: ⛅️ +8°C
```
Compact format:
```bash
curl -s "wttr.in/London?format=%l:+%c+%t+%h+%w"
# Output: London: ⛅️ +8°C 71% ↙5km/h
```
Full forecast (`T` disables ANSI colors for clean capture):
```bash
curl -s "wttr.in/London?T"
```
Format codes: `%c` condition · `%t` temp · `%h` humidity · `%w` wind · `%l` location · `%m` moon
Tips:
- URL-encode spaces: `wttr.in/New+York`
- Airport codes: `wttr.in/JFK`
- Units: `?m` (metric) `?u` (USCS)
- Today only: `?1` · Current only: `?0`
- PNG: `curl -s "wttr.in/Berlin.png" -o /tmp/weather.png`
## Open-Meteo (fallback, JSON)
Free, no key, good for programmatic use:
```bash
curl -s "https://api.open-meteo.com/v1/forecast?latitude=51.5&longitude=-0.12&current_weather=true"
```
Find coordinates for a city, then query. Returns JSON with temp, windspeed, weathercode.
Docs: https://open-meteo.com/en/docs
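Extracting the interesting fields with `jq` looks like this. The sample below is a hand-written, abbreviated response in the `current_weather` shape, for illustration only:

```shell
#!/usr/bin/env bash
# In real use, pipe the curl output instead of this canned sample:
#   curl -s "https://api.open-meteo.com/v1/forecast?latitude=51.5&longitude=-0.12&current_weather=true" | jq ...
resp='{"latitude":51.5,"longitude":-0.12,"current_weather":{"temperature":8.3,"windspeed":11.2,"weathercode":3}}'

# Pull temperature and wind speed into a one-line summary.
printf '%s' "$resp" | jq -r '.current_weather | "\(.temperature)°C, wind \(.windspeed) km/h"'
```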