I write a lot of short tech notes — things I learn during daily work that are worth remembering. For years the workflow was: learn something → forget to write it down → never find it again.
Then I built a pipeline: a Claude Code skill writes the note, a script publishes it to my Chyrp Lite blog, and a cron job keeps the session alive. Now I type /record and it's done — note saved, blog posted, zero friction.
The Pipeline
Three pieces, each doing one thing:
/record skill — Claude writes a tech note from the current conversation
pub_tech_note — publishes the markdown file to my blog via curl
refresh_chyrp_token — re-logs into the blog monthly to keep the session cookie fresh
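The cron piece is a single crontab line; a sketch, with the schedule and script path assumed rather than taken from the real setup:
# Re-login on the 1st of each month at 03:00 (illustrative schedule)
0 3 1 * * $HOME/bin/refresh_chyrp_token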
The Skill
The skill is a SKILL.md file that tells Claude how to distill a conversation into a short article. Key rules: find the one non-obvious insight, pick a title that makes someone click, and write like a colleague's quick tip, not documentation.
After writing the file, the skill runs ~/bin/pub_tech_note <filepath> to publish automatically.
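Roughly what that SKILL.md can look like (the frontmatter follows the Claude Code skills format; the wording below is illustrative, not the actual file):
---
name: record
description: Distill the current conversation into a short tech note and publish it
---
Write a short tech note from this conversation.
- Find the one non-obvious insight and build the note around it.
- Pick a title that makes someone click.
- Write like a colleague's quick tip, not documentation.
- Save the note as a markdown file, then run: ~/bin/pub_tech_note <filepath>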
The Publishing Script
Chyrp Lite has no API. The admin panel is just HTML forms. But that means you can automate it with curl — fetch the CSRF hash, build a multipart form POST, and send it.
The tricky part was getting the multipart body right; the step-by-step walkthrough, from login to post creation, follows.
Chyrp Lite has no API — the admin panel is the only way to create posts. But since it's just standard HTML forms, you can automate it entirely with curl. Here's the full pipeline: auto-login, extract CSRF token, create posts.
Step 1: Auto-login to get a session cookie
The login form at /login/ uses standard URL-encoded POST with a CSRF hash. You need to fetch the login page first to extract the hash from the hidden field, then POST credentials:
# Fetch login page, extract CSRF hash
LOGIN_HTML=$(curl -s -c /tmp/chyrp_cookies.txt https://blog.example.com/login/)
HASH=$(echo "$LOGIN_HTML" | grep -oP 'name="hash"\s+value="\K[^"]+')
# POST login
curl -s -D /tmp/chyrp_headers.txt -b /tmp/chyrp_cookies.txt -c /tmp/chyrp_cookies.txt \
  -X POST https://blog.example.com/login/ \
  -H 'content-type: application/x-www-form-urlencoded' \
  --data-raw "login=${USER}&password=${PASS}&hash=${HASH}&submit="
# Extract session token (-h stops grep from prefixing matches with filenames when searching two files)
TOKEN=$(grep -ohP 'ChyrpSession=\K[^;]+' /tmp/chyrp_headers.txt /tmp/chyrp_cookies.txt | head -1)
Step 2: Create a post via multipart form POST
The add_post endpoint expects multipart/form-data with CRLF line endings, and the same CSRF hash from step 1 is used here too. Watch out for shell-special characters in the title and body: always pass user content through printf %s so the shell never gets a chance to expand backticks in it.
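A minimal sketch of that POST, reusing the cookie jar and hash from step 1. Instead of hand-building the body, this version lets curl's --form-string generate the multipart encoding (CRLF boundaries included) without re-interpreting the content; the admin URL and the field names (feather, title, body, status) are assumptions taken from inspecting the write-post form, so check them against your own install:
# Assumed endpoint and field names; verify them in your install's write-post form
TITLE="Post title"
BODY=$(cat "$1")   # the markdown file passed to the publishing script
curl -s -b /tmp/chyrp_cookies.txt \
  "https://blog.example.com/admin/?action=add_post" \
  --form-string "feather=text" \
  --form-string "title=${TITLE}" \
  --form-string "body=${BODY}" \
  --form-string "status=public" \
  --form-string "hash=${HASH}"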
Ansible loads AWS credentials once at startup. If your playbook runs longer than the SSO role session (typically 1 hour), boto3 has no chance to refresh them — subsequent AWS API calls fail with expired credential errors.
The fix: a credential_process profile that boto3 calls each time it needs credentials.
Add to ~/.aws/config (substitute your own SSO start URL, region, account and role):
# SSO session definition that aws sso login authenticates against (placeholder start URL)
[sso-session my-sso]
sso_start_url = https://example.awsapps.com/start
sso_region = ap-southeast-2

[profile prod]
sso_session = my-sso
sso_account_id = 123456789012
sso_role_name = AdminRole
region = ap-southeast-2

# Wrapper profile for boto3/Ansible: no static credentials, only a credential_process
[profile prod_sdk]
region = ap-southeast-2
credential_process = aws configure export-credentials --profile prod --format process
Login once, then run Ansible with the wrapper profile:
aws sso login --profile prod
AWS_PROFILE=prod_sdk ansible-playbook site.yml
Each time boto3 needs credentials, it calls aws configure export-credentials, which reads the cached SSO token from prod and returns fresh role credentials — no browser, no interaction.
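What boto3 receives is the standard credential_process JSON contract; running aws configure export-credentials --profile prod --format process by hand prints roughly this shape (values truncated here):
{
  "Version": 1,
  "AccessKeyId": "ASIA...",
  "SecretAccessKey": "...",
  "SessionToken": "...",
  "Expiration": "2025-01-01T12:00:00+00:00"
}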
Don't put aws sso login in credential_process. It needs a browser and interactive confirmation, so when boto3 runs it as a captured subprocess it just hangs silently. export-credentials is the non-interactive equivalent.
When SSH runs in a script and the key isn't set up, it prompts for a password — and your script hangs forever. BatchMode=yes disables all interactive prompts (password, passphrase, host key confirmation) and fails immediately instead.
ssh -o BatchMode=yes user@host "echo ok"
Returns exit code 0 on success, non-zero immediately on failure. Perfect for connectivity checks in CI/CD or Ansible pre-tasks.
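A typical pre-flight check built on that behaviour (BatchMode and ConnectTimeout are standard ssh options; the host, user, and messages here are illustrative):
# Fail fast if the host is unreachable or key auth isn't set up
if ssh -o BatchMode=yes -o ConnectTimeout=5 user@host true; then
  echo "ssh to host: ok"
else
  echo "ssh to host: failed" >&2
  exit 1
fi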
S3 has no real directories — what looks like a folder is just a key prefix. So there's no "download folder" operation, only "download all objects with this prefix".
aws s3 sync is almost always what you want:
aws s3 sync s3://your-bucket/path/to/folder/ ./local-folder/
It's incremental — run it again and it only downloads new or changed files. aws s3 cp --recursive works too, but re-downloads everything every time.
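For comparison, the non-incremental form is:
aws s3 cp s3://your-bucket/path/to/folder/ ./local-folder/ --recursive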
Filter with --exclude / --include:
aws s3 sync s3://your-bucket/logs/ ./logs/ --exclude "*" --include "*.log"