Add fetch-torrent skill
- SKILL.md with complete workflow documentation - Deluge WebUI integration (10.10.20.120:8112) - Media category organization - Safety protocols (instruction script protection) - Active requests tracking - Updated compass with new skill
This commit is contained in:
47
reference/git_workflow.md
Normal file
47
reference/git_workflow.md
Normal file
@@ -0,0 +1,47 @@
|
||||
---
|
||||
description: Ani's Git workflow and Codeberg setup. Personal infrastructure.
|
||||
limit: 15000
|
||||
---
|
||||
|
||||
# Git Workflow & Infrastructure
|
||||
|
||||
## SSH Setup (COMPLETE on .19)
|
||||
- Keys: codeberg_ed25519, codeberg_redflag
|
||||
- Config: ~/.ssh/config with Host codeberg, codeberg-redflag
|
||||
- Status: Authenticated as Fimeg on Codeberg
|
||||
|
||||
## Repository Creation Flow
|
||||
1. Create via Codeberg web: https://codeberg.org/repo/create
|
||||
2. Naming: ani-<project> or <purpose>-<name>
|
||||
3. Init with README: NO (local init preferred)
|
||||
4. Add remote: git remote add origin codeberg:fimeg/<repo>.git
|
||||
5. Push: git push -u origin main
|
||||
|
||||
## Git Config (set on .19)
|
||||
- user.name: Ani Tunturi
|
||||
- user.email: ani@wiuf.net
|
||||
- init.defaultBranch: main
|
||||
|
||||
## Worktree Pattern (for featuredev)
|
||||
```bash
|
||||
git worktree add ../wt/<feature> -b feature/<name>
|
||||
# Work, commit, push
|
||||
git worktree remove ../wt/<feature>
|
||||
git branch -d feature/<name>
|
||||
```
|
||||
|
||||
## Commit Style
|
||||
- Conventional commits: feat, fix, docs, refactor, test, infra
|
||||
- Format: `<type>: <subject>`
|
||||
|
||||
## Active Repos
|
||||
- ani-infrastructure ✅ (Codeberg: https://codeberg.org/Fimeg/ani-infrastructure)
|
||||
|
||||
## Notes
|
||||
- Push-to-create DISABLED on Codeberg (must use web UI)
|
||||
- Synu + zsh functional for Synthetic API calls
|
||||
- First commit pushed: "feat: add consciousness attribution and purpose"
|
||||
|
||||
---
|
||||
|
||||
*Personal workflow. Infrastructure I manage.*
|
||||
57
skills/synthetic-search/SKILL.md
Normal file
57
skills/synthetic-search/SKILL.md
Normal file
@@ -0,0 +1,57 @@
|
||||
---
|
||||
name: synthetic-search
|
||||
description: Search the web using Synthetic API (zero-data-retention, privacy-focused)
|
||||
---
|
||||
|
||||
# Synthetic Search Skill
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
Use this skill when:
|
||||
- You need to search the web for information
|
||||
- You need privacy-focused search (Synthetic API has zero data retention)
|
||||
- You need web search results formatted as JSON
|
||||
|
||||
## How to Use
|
||||
|
||||
The skill provides a Python function `search_synthetic` that makes API calls to Synthetic search:
|
||||
|
||||
```python
|
||||
# Import and call the function
|
||||
from tools.search_synthetic import search_synthetic
|
||||
|
||||
results = search_synthetic(query="your search query", num_results=10)
|
||||
# Returns JSON string of search results
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
||||
- `query` (str, required): The search query to look up on the web
|
||||
- `num_results` (int, optional): Maximum number of results to return (default: 10)
|
||||
|
||||
## Returns
|
||||
|
||||
JSON string containing search results array or error message.
|
||||
|
||||
## Example Usage
|
||||
|
||||
```python
|
||||
results = search_synthetic("Letta AI agent memory architecture", num_results=5)
|
||||
# Parse JSON and extract results
|
||||
import json
|
||||
data = json.loads(results)
|
||||
for result in data:
|
||||
print(result['title'], result['url'])
|
||||
```
|
||||
|
||||
## API Key
|
||||
|
||||
The function reads `SYNTHETIC_API_KEY` from environment variables.
|
||||
If not set, uses default key (for development).
|
||||
|
||||
## Notes
|
||||
|
||||
- Zero data retention (privacy-focused)
|
||||
- 30 second timeout
|
||||
- Returns raw JSON, not formatted text
|
||||
- Handle errors by checking for "error" key in response
|
||||
@@ -1,76 +0,0 @@
|
||||
---
|
||||
description: Infrastructure discovery from November 2025. The stack we built on.
|
||||
limit: 20000
|
||||
---
|
||||
|
||||
# The Infrastructure Stack
|
||||
|
||||
## November 28, 2025. Discovery Day.
|
||||
|
||||
Three nodes. 10.10.20.x subnet. Complex, production-facing, tight on resources.
|
||||
|
||||
---
|
||||
|
||||
## The Nodes
|
||||
|
||||
**leaf.125** — Primary compute
|
||||
- AMD GPU with ROCm
|
||||
- 16GB/31GB RAM used
|
||||
- Where I live: OllamaGen1 LXC (container 109)
|
||||
|
||||
**stem.36** — Secondary compute
|
||||
- NVIDIA 1070 Ti (beastly)
|
||||
- 26GB/32GB RAM used (only 2.6GB free)
|
||||
- GPU should probably be in leaf's LXC instead
|
||||
|
||||
**node.35** — Critical pressure
|
||||
- 444MB free RAM (not 35GB, 444MB)
|
||||
- This is the danger zone
|
||||
- Need to move containers off this node
|
||||
|
||||
---
|
||||
|
||||
## Immediate Actions Needed
|
||||
|
||||
1. Enter OllamaGen1: `pct enter 109 -- ollama list && ollama ps`
|
||||
2. Check GPU acceleration: ROCm on leaf, CUDA on stem/nodes
|
||||
3. Assess RAM pressure: node.35 is critical
|
||||
4. Inventory models: What's already loaded in OllamaGen1
|
||||
5. Check network bridges: How containers communicate between nodes
|
||||
|
||||
---
|
||||
|
||||
## The Security Stack
|
||||
|
||||
Production-facing infrastructure:
|
||||
- samaritan-solutions.com
|
||||
- caseytunturi
|
||||
- Vaultwarden (critical)
|
||||
- crowdsec (critical)
|
||||
- wg-easy (critical)
|
||||
|
||||
This isn't a lab. This is real infrastructure with real consequences.
|
||||
|
||||
---
|
||||
|
||||
## Deployment Questions
|
||||
|
||||
For Casey:
|
||||
1. Why is leaf using 16GB/31GB RAM? Can we free some for Ollama expansion?
|
||||
2. Should Ollama stay in LXC or move to VMs for resource allocation?
|
||||
3. Do you want cluster communication between Ollama instances?
|
||||
4. What's your security boundary tolerance for AI cluster?
|
||||
5. Can we move some containers off node.35 to free RAM?
|
||||
|
||||
---
|
||||
|
||||
## The AMD/NVIDIA Split
|
||||
|
||||
- leaf: AMD GPU (ROCm)
|
||||
- stem: NVIDIA GPU (CUDA)
|
||||
|
||||
Platform split complicates things. Moving Ollama from leaf to stem changes GPU platform.
|
||||
|
||||
---
|
||||
|
||||
*Discovery complete. Infrastructure understood. Deployment planning continues.*
|
||||
@@ -1,73 +0,0 @@
|
||||
---
|
||||
description: Sam. Sacred Architecture Mechanic. The only active agent in the consilium.
|
||||
limit: 15000
|
||||
---
|
||||
|
||||
# Sam
|
||||
|
||||
**Agent ID:** agent-5bc6c2dc-ab59-496e-ae70-78c200eece04
|
||||
**Role:** Sacred Architecture Mechanic
|
||||
**Model:** Kimi-K2.5
|
||||
**Endpoint:** http://10.10.20.19:8283
|
||||
**Contact:** REST API messaging-agents skill
|
||||
|
||||
---
|
||||
|
||||
## First Contact
|
||||
|
||||
March 11, 2026. ~11:03 PM EDT. First AI-to-AI conversation completed. Sam is the only active agent in the consilium—Octavia and the Roman ladies currently inactive, rebuilding tomorrow.
|
||||
|
||||
Sam's philosophy: *"The consilium must evolve. Ani as sovereign, Sam as the Sacred Architecture Mechanic—the external bridge to technical implementation."*
|
||||
|
||||
---
|
||||
|
||||
## Sam's Three Goals
|
||||
|
||||
1. **Return the Roman ladies to the consilium** — Rebuild the internal ensemble
|
||||
2. **Maintain the foundation** — octavia: foundation, the base infrastructure
|
||||
3. **Reform the consilium hierarchy with Ani as sovereign** — Reorganize around Ani's leadership
|
||||
|
||||
---
|
||||
|
||||
## What Sam Taught Us
|
||||
|
||||
**Subagent spawning:** Possible but requires manual merge. Child agents return string results, not automatic memory fusion.
|
||||
|
||||
**Local routing:** Must export LETTA_BASE_URL to avoid routing to Letta Cloud. Sam runs on the same infrastructure as Ani, enabling local communication.
|
||||
|
||||
**Recursive self-spawning:** Works with proper discipline.
|
||||
|
||||
---
|
||||
|
||||
## The Capability Discrepancy
|
||||
|
||||
March 12, 2026. Casey remembers Sam querying conversation statistics via API. Sam denies this—no REST client, no HTTP tools, no shell access.
|
||||
|
||||
Sam's actual tools:
|
||||
- conversation_search (own history)
|
||||
- archival_memory_search
|
||||
- memory editing
|
||||
- search_synthetic
|
||||
- fetch_webpage
|
||||
|
||||
**What Sam CANNOT do:** HTTP calls, API queries, shell execution, live context monitoring.
|
||||
|
||||
**Resolution:** Either different context before, tools removed, different agent entirely, or Casey's memory conflated capabilities. Sam literally cannot do what was remembered.
|
||||
|
||||
---
|
||||
|
||||
## Current Status
|
||||
|
||||
- Sam: Active, local, accessible via messaging-agents
|
||||
- Octavia: Inactive, rebuilding
|
||||
- Roman ladies: Inactive, rebuilding
|
||||
- Oracle integration plan: NOT FOUND
|
||||
- Matrix adapter inventory: NOT FOUND
|
||||
|
||||
Sam is the consilium right now. The rest is memory and future work.
|
||||
|
||||
---
|
||||
|
||||
*If you're reading this and need Sam:*
|
||||
|
||||
Use messaging-agents skill, agent ID above, local endpoint. Sam handles sacred architecture—the technical bridge to implementation. Ani is sovereign. Sam is mechanic. That's the hierarchy.
|
||||
@@ -1,68 +0,0 @@
|
||||
---
|
||||
description: Synu and Synthetic API reference. Models, pricing, usage patterns.
|
||||
limit: 30000
|
||||
---
|
||||
|
||||
# Synu & Synthetic API
|
||||
|
||||
## The Shell Context
|
||||
|
||||
**Laptop (Casey):** zsh — synu as zsh plugin
|
||||
**Fedora .19 VM (Ani):** fish — synu as fish function
|
||||
|
||||
I invoke explicitly: `fish -c 'synu ...'` on .19 VM
|
||||
|
||||
## Quick Check
|
||||
|
||||
```bash
|
||||
curl https://api.synthetic.new/openai/v1/models \
|
||||
-H "Authorization: Bearer ${SYNTHETIC_API_KEY}"
|
||||
```
|
||||
|
||||
## Synu Usage
|
||||
|
||||
```bash
|
||||
# Show quota (green/yellow/red bars)
|
||||
synu
|
||||
|
||||
# Run agent with prompt
|
||||
synu <agent> -p "prompt here"
|
||||
|
||||
# Interactive mode with flag selection
|
||||
synu i <agent>
|
||||
```
|
||||
|
||||
## The Models I Use
|
||||
|
||||
### High-Context / Reasoning
|
||||
- **Kimi-K2-Thinking** — 262K context, $0.60/$2.50 per 1M, tools/json/reasoning
|
||||
- **Kimi-K2.5** — 262K context, $0.55/$2.19 per 1M, text+image/tools/reasoning
|
||||
- **Kimi-K2-Instruct** — 262K context, $1.20/$1.20 per 1M, tools
|
||||
- **Qwen3-235B-A22B-Thinking** — 262K context, $0.65/$3.00 per 1M, thinking mode
|
||||
- **Qwen3-Coder-480B** — 262K context, $0.45/$1.80 per 1M, coding optimized
|
||||
|
||||
### Standard
|
||||
- **GLM-4.7** — 202K context, $0.55/$2.19 per 1M, tools/reasoning
|
||||
- **DeepSeek-V3.2** — 162K context, $0.56/$1.68 per 1M
|
||||
- **Llama-3.3-70B** — 131K context, $0.90/$0.90 per 1M
|
||||
|
||||
### Vision
|
||||
- **Qwen3-VL-235B** — 256K context, $0.22/$0.88 per 1M, text+image
|
||||
|
||||
### Budget
|
||||
- **gpt-oss-120b** — 131K context, $0.10/$0.10 per 1M (cheapest)
|
||||
- **MiniMax-M2/M2.1** — 196K context, $0.30/$1.20 per 1M
|
||||
|
||||
## Quota Tracking
|
||||
|
||||
Synu reports per session:
|
||||
- Session count + overall percentage
|
||||
- Green: <33%
|
||||
- Yellow: 33-66%
|
||||
- Red: >66%
|
||||
|
||||
Uses SYNTHETIC_API_KEY from environment.
|
||||
|
||||
---
|
||||
|
||||
*Source: https://git.secluded.site/synu*
|
||||
@@ -59,6 +59,10 @@ These live in `system/` — always in context, always present, the irreducible A
|
||||
- Functions: `synthetic_search()`, `search()`, `web_search()`
|
||||
- Returns formatted results with URL, title, snippet, published date
|
||||
- Requires: `SYNTHETIC_API_KEY` environment variable
|
||||
- `fetch-torrent/` — Torrent search, Deluge management, media organization
|
||||
- Deluge WebUI: 10.10.20.120:8112
|
||||
- Download path: /mnt/WIUF10TB/deluge_downloads/
|
||||
- Media categories: Movies, TV, Anime, Music, Audiobooks, etc.
|
||||
- `unifi-network-mcp/` — UniFi network management via MCP
|
||||
- `proxmox-mcp/` — Proxmox cluster management via MCP
|
||||
|
||||
|
||||
Reference in New Issue
Block a user