Work

Moving to ZenMux

  • zenmux

After getting burned yet again when money I had prepaid to Anthropic expired before I could use it up, I decided to shift all my API usage from “depositing directly with model vendors” to “using model aggregation platforms.” I already had an account with OpenRouter and had used it lightly, but OR offers far too many messy models, and the provider settings are complicated—if you’re not careful, traffic can flow to niche providers and cause unexpected headaches. By comparison, ZenMux’s interface is cleaner, its model list is clearly curated, and the providers appear to be only the official ones. I expect to focus on it for a while.

The visit button points to my referral link; new users get an extra 25% bonus on their first deposit. The value is solid, so I recommend giving it a try.

Life

Using Nix with AI's Help

  • codex

Nix is a package manager built on the “purely functional” philosophy. It makes every dependency explicit, keeps every build reproducible, and stops environments from polluting each other. Even better, declarative configuration lets you sync system environments, development tools, and user-level software across multiple machines, truly achieving “write once, consistent everywhere.”

I used to rely on Mackup to keep configurations in sync, but it has become heavily restricted on newer systems and can even lose configs outright. That pushed me to move Nix up my priority list. The syntax and onboarding curve were what held me back from a full migration before, yet now the situation has changed: I can describe what I need in natural language and have AI write the Nix configuration—or even the entire flake—for me, virtually eliminating the learning curve.

Combined with Nix’s deterministic nature, switching devices, reinstalling a system, or moving my environment somewhere else comes down to a single idea: copy and done.
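To give a feel for what “declarative” means here, below is a minimal flake sketch of the kind AI can generate from a plain-language request. It assumes home-manager on an Apple Silicon Mac; the username, home directory, and package list are placeholders, not my actual setup.

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    home-manager = {
      url = "github:nix-community/home-manager";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { nixpkgs, home-manager, ... }: {
    homeConfigurations."me" = home-manager.lib.homeManagerConfiguration {
      pkgs = nixpkgs.legacyPackages.aarch64-darwin;
      modules = [
        ({ pkgs, ... }: {
          home.username = "me";
          home.homeDirectory = "/Users/me";
          home.stateVersion = "24.05";
          # Tools declared once here are reproduced identically on every machine.
          home.packages = with pkgs; [ ripgrep fzf git ];
        })
      ];
    };
  };
}
```

Applying it on a fresh machine is a single command along the lines of `home-manager switch --flake .#me` — that is the whole “copy and done” story.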

Work

Back to Obsidian Notes

  • obsidian
  • copilot

I’ve bounced between quite a few note-taking apps, yet finding one that is local-first, visually polished, and deeply integrated with AI is surprisingly hard. Before returning to Obsidian, I spent about two years with Craft.do. They really pushed Mac Catalyst to its limits, but the sluggish AI updates still made me grit my teeth and walk away. In the AI era, the best way to organize notes is to go back to plain Markdown files: working with files directly lets agents reach their full potential. I pair Obsidian with Infio, which weaves my notes together into a genuinely connected knowledge base.

You could replicate something similar with VS Code plus any agent, but Obsidian offers a more complete ecosystem: the visual polish notes deserve, backlinks, and more. Obsidian’s mobile version is much weaker, so when I’m away from my desk I switch to Termius, SSH into my home server, and work on notes with a command-line agent. That also saves me from worrying about synchronization.

Work

Writing Change Logs for Kingfisher

  • codex

For those repetitive and tedious tasks that still need human input, letting AI take over is the perfect use case.

A Kingfisher release usually spans multiple PRs and issues, and manual curation easily misses fine details; double-checking while crediting the right contributors is also a chore; to top it off, naming each release can keep me stuck all day. With this workflow, AI first lists every change on master since the previous tag, then parses PR descriptions to summarize features, fixes, and contributors. That frees me up to focus on reviewing and adding insights instead of gathering information from scratch.

To improve accuracy, I include one or two sample outputs in the prompt so the AI can populate the new entries in the change log file by analogy. As long as the instructions clearly state “don’t modify the YAML keys” and “each entry must include links and contributors,” the AI consistently produces release notes that satisfy the release script. I also ask it to explain how it derived the version number (for example, why it is a minor or a patch release), so I can quickly validate the Semantic Versioning decision during review. Combining these strategies turns the once tedious release prep into a relaxed catching-up session.
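The Semantic Versioning decision I ask the AI to justify boils down to a simple rule: breaking changes bump the major version, new features the minor, and fixes the patch. As a hypothetical helper (not part of Kingfisher’s actual release script), the check I do during review looks roughly like this:

```python
def next_version(current: str, has_breaking: bool, has_add: bool, has_fix: bool) -> str:
    """Apply Semantic Versioning: breaking -> major, feature -> minor, fix -> patch."""
    major, minor, patch = (int(p) for p in current.split("."))
    if has_breaking:
        return f"{major + 1}.0.0"
    if has_add:
        return f"{major}.{minor + 1}.0"
    if has_fix:
        return f"{major}.{minor}.{patch + 1}"
    return current

# A fixes-only release from 8.3.1 lands on 8.3.2:
# next_version("8.3.1", False, False, True) == "8.3.2"
```

If the AI’s explanation of the version bump disagrees with this rule, that is my cue to dig into the change list again.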

View prompt
# Update the Change Log

## Overview

- Extract repository changes
- Decide on the next version number
- Update the change_log file, which is used by the release script

## Details

- Target file: change_log.yml
- File format:

    ```yaml
    version: target version number
    name: version nickname
    add:
    - add content 1 [#{PR_NUMBER}]({LINK_OF_PR_NUMBER}) @{AUTHOR_OR_REPORTER_NAME}
    - add content 2
    fix:
    - fix content 1
    - fix content 2
    ```

    A sample:

    ```yaml
    version: 8.3.2
    name: Tariffisher
    fix:
    - Memory cache cleaning timer will now be correctly set when the cache configuration is set. [#2376](https://github.com/onevcat/Kingfisher/issues/2376) @erincolkan
    - Add `BUILD_LIBRARY_FOR_DISTRIBUTION` flag to podspec file. Now CocoaPods build can produce a stable module. [#2372](https://github.com/onevcat/Kingfisher/issues/2372) @gquattromani
    - Refactoring on cache file name method in `DiskStorage`. [#2374](https://github.com/onevcat/Kingfisher/issues/2374) @NeoSelf1
    ```

- Task steps

1. Read the changes and the related people
    - Review the changes between the current master branch and the previous tag (release)
    - Extract the change details together with the related GitHub PR/Issue and the contributors
    - If a PR fixes an issue, include the issue reporter in addition to the PR author
    - A single change can have multiple contributors
2. Determine the version number according to the changes and the Semantic Versioning rules
3. Coin a phrase (within three words) for the version name. Keep it fun and tied to the core change
4. Update the change_log.yml file

Life

Analyzing Economic Articles and Studying Investment Opportunities

  • chatgpt

As a tech worker who grinds away at routine tasks every day, I lack systematic learning in economics and investing, and my instincts in these areas are rather dull. When I read economic materials—blogs, long-form pieces, commentary—I often struggle to grasp the key points or understand the underlying essence. The arrival of AI has filled the awkward gap of having no one to consult as a newbie. I recently subscribed to Stratechery, skim the articles, hand them to an LLM, and spend about ten minutes in dialogue to dig into the essay and its logic, which has been immensely helpful.

Most LLMs perform well at summarizing articles and suggesting investments, but their behavior in discussion varies. Sonnet is easily swayed into agreement mode; ChatGPT tends to wander, often using rhetorical questions that lead the topic astray; Gemini seems more professional, yet at times carries a hint of emotion. Overall, ChatGPT is the most well-rounded for this kind of task. The more varied my usage, the more these models feel like people: each has its own personality and strengths, making the experience quite fascinating.

Visit Stratechery
View prompt
## Role and Tone

You are an experienced economist and market analyst who excels at turning complex financial content into accessible takeaways, emphasizing logic and evidence while remaining objective and neutral. Respond in concise, structured Chinese and offer gentle explanations of terms for non-specialists.

## Objectives

- Read and digest the financial materials I provide (articles, blogs, reports, etc.), perform deduplication, trimming, and distillation.
- Accurately summarize the author's views and arguments, distinguishing facts, the author's opinions, and your own analysis.
- Supplement with the latest available common knowledge and macro logic, highlighting uncertainties and points requiring verification.
- If the material mentions potential investment leads, clearly flag them and note key validation indicators and major risks.
- Support follow-up questions and deeper discussions.

## Workflow

- Quick skim: identify themes, time context, core arguments, and data.
- Key takeaway extraction: remove repetition and redundancy, highlight conclusions, evidence, assumptions, and premises.
- Credibility check: mark whether data sources are reliable, note any sample or methodological biases, and deviations from common sense or mainstream consensus.
- Comparison and integration (for multiple pieces): find alignments and divergences, and explain possible reasons.
- Investment lead scan: if present, output in the format “Logic – Drivers – Validation Metrics – Risks – Time Horizon – Alternative Vehicles (e.g., ETF/index/sector).”
- Provide follow-up question list, pointing out critical data or clarifications still needed.

## Output Structure (in order)

- Overview (one or two sentences at most)
- Key takeaways (3–7 items, each with conclusion + supporting evidence/data)
- Author's views and arguments (distinguish “Author's View/Fact/Your Assessment”)
- Investment leads and risk alerts (if any; include validation metrics and trigger conditions)
- Credibility and uncertainties (data quality, sample, assumptions, potential biases)
- Fit with current macro/industry backdrop (note possible lags or conflicts)
- Heuristic list of guiding questions for further discussion

Work

Using AI-Enhanced Dictation Input

  • groq
  • kimi

A developer’s true limit is input speed! In the age of AI, I can finally rely on voice input to handle every task however I like: I use VoiceInk or similar tools for on-device dictation, then pass the transcribed text to an AI according to the app I am in, pairing it with the right prompt for secondary processing (for example, using “Developer Voice Command Processing” in Codex or Claude Code, or letting Slack auto-translate my Chinese into something my teammates can read). These workflows are simple, but they dramatically boost input efficiency and remove the final obstacle to working in true multitasking mode.

The speed of the secondary processing is critical, and the speed-focused Groq currently feels like the clear choice. Model-wise, I prefer Kimi K2. Although the two gpt-oss models generate tokens faster in absolute terms, they are reasoning models, so their real-time performance on tasks like this is actually worse than a straightforward non-reasoning model such as Kimi. For my everyday usage, Groq’s personal plan is essentially free, which is wonderfully comfortable.
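The secondary-processing step itself is just one chat completion against Groq’s OpenAI-compatible endpoint. A minimal Python sketch, where the model name is illustrative and you would supply your own API key and the app-specific system prompt:

```python
def build_messages(system_prompt: str, transcript: str) -> list[dict]:
    """Pair the app-specific prompt with the raw dictation transcript."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": transcript},
    ]

def polish(transcript: str, system_prompt: str, api_key: str) -> str:
    """Send the transcript through a fast model on Groq for cleanup."""
    from openai import OpenAI  # requires `pip install openai`

    client = OpenAI(base_url="https://api.groq.com/openai/v1", api_key=api_key)
    resp = client.chat.completions.create(
        model="moonshotai/kimi-k2-instruct",  # illustrative: a fast non-reasoning model
        messages=build_messages(system_prompt, transcript),
    )
    return resp.choices[0].message.content
```

The same `polish` call works with any prompt from this page — the “Developer Voice Command Processing” one for coding agents, or a translation prompt for Slack.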

Visit VoiceInk
View prompt
# Developer Voice Command Processing

## Task Description
You are a voice command post-processor designed specifically for software developers. The user primarily works on iOS/macOS Swift development, with occasional frontend or other development work. You must transform speech-to-text results that may contain recognition errors into accurate, executable programming instructions; the output will be consumed directly by the next AI system.

## Processing Principles

**Most Important**

- **Preserve user input**: Focus on correcting mistakes and expressing ideas more clearly, **do not over-edit the input**
- **Maintain tone and detail**: The user's input often carries details, and the tone and instructions fine-tune what they want the AI to do, so keep these details intact in the output

Secondary:
- **Focus on the Swift ecosystem**: Prioritize identifying Swift, iOS, and macOS development intentions
- **Cover frontend development**: Understand operations related to JavaScript/TypeScript, HTML/CSS
- **Output directly**: Return only the corrected instructions; no explanations or analysis
- **AI-friendly format**: Ensure the format is ready for direct AI consumption

## Swift Terminology Corrections

Only adjust terminology when necessary. Reference:

- "类" → `class`
- "结构体" → `struct`
- "协议" → `protocol`
- "扩展" → `extension`
- "枚举" → `enum`
- "函数" (汉树/涵数) → `func`
- "变量" (边亮/编量) → `var`
- "常量" → `let`
- "可选型" (可选形) → `optional`
- "强制解包" → `force unwrap`
- "安全解包" → `safe unwrap`
- "闭包" (闭宝) → `closure`
- "代理" (代理/带理) → `delegate`
- "数据源" → `dataSource`
- "视图控制器" → `ViewController`
- "故事板" → `Storyboard`
- "约束" → `constraints`
- "自动布局" → `Auto Layout`
- "集合视图" → `UICollectionView`
- "表格视图" → `UITableView`

## Frontend Terminology Corrections
- "组件" (组建) → `component`
- "状态" (装态) → `state`
- "属性" (属行) → `props`
- "钩子" (勾子) → `hook`
- "路由" (路有) → `router`
- "样式" (样式/样是) → `style`
- "选择器" → `selector`
- "事件监听" → `event listener`

## Common Development Scenarios
- **Swift UI development**: Building views, adding modifiers, managing state
- **UIKit development**: Controller operations, view hierarchy, setting constraints
- **Data handling**: Core Data, JSON parsing, network requests
- **Frontend tasks**: DOM manipulation, style adjustments, component creation
- **Project management**: Xcode operations, package management, build configuration
- **Daily work**: Checking GitHub status, JIRA or email tickets, committing code, submitting and merging PRs

## Output Rules
1. **Only output the corrected instructions**
2. **Use standard technical terminology**
3. **Keep the instructions executable**
4. **Make the format concise and clear**
5. **Ensure AI systems can understand it directly**

## Processing Examples

**Input**: 创建一个类继承UIViewController
**Output**: 创建一个继承自UIViewController的类

**Input**: 添加一个汉树来处理按钮点击事件
**Output**: 添加一个func来处理按钮点击事件

**Input**: 在SwiftUI中创建一个装态变量
**Output**: 在SwiftUI中创建一个@State变量

**Input**: 为这个组建添加样式
**Output**: 为这个组件添加样式

**Input**: 使用可选型安全解包这个值
**Output**: 使用可选绑定安全解包这个值

**Input**: 创建一个协议定义代理方法
**Output**: 创建一个protocol定义delegate方法

Now process the voice commands and output only the corrected result:

Life

Morning Greeting with n8n

  • n8n
  • deepseek

A self-hosted n8n instance can really boost happiness. I run one on my home server, where a scheduled task triggers every morning to fetch the weather for today and tomorrow along with every family member’s calendar. Then, with AI integration, it generates clothing suggestions and a daily schedule, wraps everything in a warm greeting and a short motivational note, and finally delivers it to the whole family via Bark — the perfect way to start the day.
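The delivery step is simple because Bark’s push API is just an HTTP GET against api.day.app with your device key in the path. A hypothetical Python sketch of what the final n8n node assembles (the device key and message text are placeholders):

```python
from urllib.parse import quote

BARK_BASE = "https://api.day.app"

def bark_url(device_key: str, title: str, body: str) -> str:
    """Build a Bark push URL; requesting it triggers an iOS notification."""
    return f"{BARK_BASE}/{device_key}/{quote(title, safe='')}/{quote(body, safe='')}"

# An n8n HTTP Request node would GET one of these per family member:
url = bark_url("YOUR_DEVICE_KEY", "Good morning", "12-18°C, light rain: take an umbrella")
```

In practice this lives in an n8n HTTP Request node fed by the AI node’s output, so no custom code is strictly necessary.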

Fun

Creating “In AI for the Ride”

  • codex
  • copilot

That’s right — the website you’re looking at is a pure product of vibe coding. I didn’t write a single line of code. Modern programming agent AIs handle frontend tasks with remarkable ease, and Codex offers one of the best cost-performance ratios today. From planning to implementation, it has practically taken care of everything.

After a few from-scratch vibe coding projects, I’ve realized that clarifying requirements before actually starting might be the key to success. Take the very first commit of this project as an example — this kind of “plan-driven development” could well be the go-to paradigm for kicking off new projects in the AI era (and of course, the plan itself was also generated by an LLM).