AI

In the examples below, > indicates a command entered inside the AI application itself.

Cherry Studio: LLM Desktop Client

Cherry Studio is a desktop client that supports multiple large language model providers.

Download Client | Cherry Studio

shell
paru -S cherry-studio-bin

Chatbox

Chatbox AI is an AI client and assistant that supports a wide range of advanced AI models and APIs on Windows, macOS, Android, iOS, Linux, and the web.

Chatbox AI

shell
paru -S chatbox-bin

CC Switch

CC Switch provides a desktop app for managing five CLI tools in one place. Instead of editing config files by hand, you get a visual interface with one-click vendor import, one-click switching between vendors, 50+ built-in vendor presets, unified MCP and SKILLS management, and instant switching from the system tray. Everything is backed by SQLite with atomic writes to avoid corrupting your configuration.

Releases · farion1231/cc-switch

shell
paru -S cc-switch-bin

After launching the app, add a provider under the target CLI, enter the API base URL and key, add the model, then switch providers. All CLI tools except Claude Code need to be restarted before the changes take effect.

If you want to adjust the reasoning effort for OpenAI models while adding a provider, edit the JSON config. Supported values are off, minimal, low, medium, high, and xhigh:

json
{
  "models": {
    "gpt-5.4": {
      "name": "GPT-5.4",
      "options": {
        "reasoning": {
          "effort": "xhigh"
        }
      }
    },
    "gpt-5.3-codex": {
      "name": "GPT-5.3 Codex",
      "options": {
        "reasoning": {
          "effort": "xhigh"
        }
      }
    }
  },
  "npm": "@ai-sdk/openai-compatible",
  "options": {
    "apiKey": "sk-xxx",
    "baseURL": "https://example.com/v1"
  }
}

OpenCode

OpenCode is an open-source AI coding agent. It provides a terminal interface, desktop app, and IDE extensions.

OpenCode | Download

shell
# install with script
curl -fsSL https://opencode.ai/install | bash
# install with Node
npm i -g opencode-ai
# install from AUR
paru -S opencode

# create the config directory
mkdir -p ~/.config/opencode

# enable the experimental LSP tool
echo 'export OPENCODE_EXPERIMENTAL_LSP_TOOL=true' >> ~/.zshrc
# enable the Exa integration
echo 'export OPENCODE_ENABLE_EXA=1' >> ~/.zshrc
source ~/.zshrc

To configure a third-party API, edit ~/.config/opencode/opencode.json:

json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openai": {
      "options": {
        "baseURL": "https://example.com/v1",
        "apiKey": "sk-xxx"
      },
      "models": {
        "gpt-5.4": {
          "name": "GPT-5.4",
          "limit": {
            "context": 1050000,
            "output": 128000
          },
          "options": {
            "store": false,
            "reasoningEffort": "medium",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          },
          "variants": {
            "low": {
              "reasoningEffort": "low",
              "textVerbosity": "low"
            },
            "medium": {
              "reasoningEffort": "medium",
              "textVerbosity": "low"
            },
            "high": {
              "reasoningEffort": "high",
              "textVerbosity": "low"
            },
            "xhigh": {
              "reasoningEffort": "xhigh",
              "textVerbosity": "low"
            }
          }
        },
        "gpt-5.3-codex": {
          "name": "GPT-5.3 Codex",
          "limit": {
            "context": 400000,
            "output": 128000
          },
          "options": {
            "store": false,
            "reasoningEffort": "medium",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          },
          "variants": {
            "low": {
              "reasoningEffort": "low",
              "textVerbosity": "low"
            },
            "medium": {
              "reasoningEffort": "medium",
              "textVerbosity": "low"
            },
            "high": {
              "reasoningEffort": "high",
              "textVerbosity": "low"
            },
            "xhigh": {
              "reasoningEffort": "xhigh",
              "textVerbosity": "low"
            }
          }
        }
      }
    }
  },
  "model": "openai/gpt-5.4",
  "small_model": "openai/gpt-5.3-codex",
  "permission": {
    "*": "allow",
    "external_directory": {
      "*": "ask"
    },
    "doom_loop": "ask",
    "bash": {
      "*": "allow",
      "git push*": "ask",
      "git commit*": "ask",
      "rm*": "ask",
      "sudo*": "ask"
    },
    "edit": {
      "*": "allow"
    }
  },
  "agent": {
    "build": {
      "options": {
        "store": false
      }
    },
    "plan": {
      "options": {
        "store": false
      }
    }
  }
}

shell
# open a new terminal
$ opencode

# select the model
> /models

Claude Code

Collaborate with Claude directly inside your codebase. You can build, debug, and ship from the terminal, IDE, Slack, or the web. Describe what you need, and let Claude handle the rest.

Download Claude | Claude by Anthropic

shell
# install with script
curl -fsSL https://claude.ai/install.sh | bash
# install with Node
npm i -g @anthropic-ai/claude-code
# install from AUR
paru -S claude-code

Configure a third-party API:

shell
# open a new terminal

# set a third-party base URL and key temporarily
$ export ANTHROPIC_BASE_URL="https://example.com/v1"
$ export ANTHROPIC_AUTH_TOKEN="sk-xxx"

# set a third-party base URL and key permanently
$ echo 'export ANTHROPIC_BASE_URL="https://example.com/v1"' >> ~/.zshrc
$ echo 'export ANTHROPIC_AUTH_TOKEN="sk-xxx"' >> ~/.zshrc
$ source ~/.zshrc

$ claude

Security prompt on first run:

 1. Yes, I trust this folder
 2. No, exit

Codex CLI

Codex CLI is a coding assistant released by OpenAI.

CLI - Codex | OpenAI Developers

shell
$ npm i -g @openai/codex

Configure a third-party API:

shell
$ mkdir -p ~/.codex

# configure the third-party key
$ nano ~/.codex/auth.json

{
  "OPENAI_API_KEY": "sk-xxx"
}

# configure the third-party base URL and related settings
$ nano ~/.codex/config.toml

model_provider = "OpenAI"
model = "gpt-5.4"
review_model = "gpt-5.4"
model_reasoning_effort = "xhigh"
disable_response_storage = true
network_access = "enabled"
windows_wsl_setup_acknowledged = true
model_context_window = 1000000
model_auto_compact_token_limit = 900000

[model_providers.OpenAI]
name = "OpenAI"
base_url = "https://example.com/v1"
wire_api = "responses"
requires_openai_auth = true

# run
$ codex

Cline CLI

Cline CLI runs an AI coding agent directly in your terminal. You can pipe git diff into it for automated code review in CI/CD, run multiple instances for parallel development, or integrate it into your existing shell workflows.

Install - Cline Docs

shell
$ npm i -g cline

Configure a third-party API:

shell
$ cline

How would you like to get started?
> Use your own API key

Select a provider
> OpenAI Compatible

OpenAI Compatible API Key
> sk-xxx

Model ID
> gpt-5.4

Base URL (optional)
> https://example.com/v1
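Beyond the interactive setup, the CI review and parallel-instance workflows mentioned above can be sketched non-interactively. This is a hypothetical sketch, assuming your cline version accepts a task string as an argument and reads piped input from stdin; the file paths and prompts are illustrative, so verify the exact flags against cline --help:

```shell
# pipe a diff into cline for an automated review step in CI
# (hypothetical invocation; verify against `cline --help`)
git diff origin/main...HEAD | cline "Review this diff for bugs and risky changes"

# run multiple instances in parallel with ordinary shell job control
cline "Write unit tests for src/utils.ts" &
cline "Update the README to match the new CLI flags" &
wait
```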

rtk: Filter and Compact LLM Context

rtk filters and compacts command output before it reaches the LLM context. It is a single Rust binary with zero dependencies and typically adds less than 10 ms of overhead.

rtk - Make your AI coding agent smarter | CLI proxy

shell
# install with script
curl -fsSL https://raw.githubusercontent.com/rtk-ai/rtk/refs/heads/master/install.sh | sh
# install from AUR
paru -S rtk

Commands: rtk/README_zh.md at master · rtk-ai/rtk
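Before wiring rtk into an agent, you can try the two invocation patterns by hand (they are the same patterns the hook rewrites Bash commands into, per the AGENTS.md snippet at the end of this section). A quick sketch, assuming rtk is already on your PATH:

```shell
# simple command: prefix it with rtk to get filtered, compacted output
rtk git status

# complex pipeline: wrap it in proxy mode so pipes and quoting pass through intact
rtk proxy sh -c "git log --oneline | head -20"
```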

Initialize:

shell
# This installs a PreToolUse hook that transparently rewrites Bash commands
# into their rtk equivalents.
$ rtk init --global 

RTK hook installed/updated (global).

  Hook:      /home/duanluan/.claude/hooks/rtk-rewrite.sh
  RTK.md:    /home/duanluan/.claude/RTK.md (10 lines)
  CLAUDE.md: @RTK.md reference added

Patch existing /home/duanluan/.claude/settings.json? [y/N] 
y

  settings.json: hook added
  Restart Claude Code. Test with: git status
  filters:   /home/duanluan/.config/rtk/filters.toml (template, edit to add user-global filters)


  [info] Anonymous telemetry is enabled (opt-out: RTK_TELEMETRY_DISABLED=1)
  [info] See: https://github.com/rtk-ai/rtk#privacy--telemetry

Initialize for a specific tool:

shell
rtk init -g --opencode
rtk init -g --codex

rtk init -g --agent cursor
rtk init -g --agent windsurf
rtk init -g --agent cline

Show the current setup:

shell
rtk init --show

rtk Configuration:

[ok] Hook: /home/njcm/.claude/hooks/rtk-rewrite.sh (thin delegator, version 3)
[ok] RTK.md: /home/njcm/.claude/RTK.md (slim mode)
[ok] Integrity: hook hash verified
[ok] Global (~/.claude/CLAUDE.md): @RTK.md reference
[--] Local (./CLAUDE.md): not found
[ok] settings.json: RTK hook configured
[ok] OpenCode: plugin installed (/home/njcm/.config/opencode/plugins/rtk.ts)
[--] Cursor hook: not found
[--] Cursor hooks.json: not found

Usage:
  rtk init              # Full injection into local CLAUDE.md
  rtk init -g           # Hook + RTK.md + @RTK.md + settings.json (recommended)
  rtk init -g --auto-patch    # Same as above but no prompt
  rtk init -g --no-patch      # Skip settings.json (manual setup)
  rtk init -g --uninstall     # Remove all RTK artifacts
  rtk init -g --claude-md     # Legacy: full injection into ~/.claude/CLAUDE.md
  rtk init -g --hook-only     # Hook only, no RTK.md
  rtk init --codex            # Configure local AGENTS.md + RTK.md
  rtk init -g --codex         # Configure ~/.codex/AGENTS.md + ~/.codex/RTK.md
  rtk init -g --opencode      # OpenCode plugin only
  rtk init -g --agent cursor  # Install Cursor Agent hooks

Inspect savings:

shell
rtk gain                        # summary
rtk gain --graph                # ASCII graph (30 days)
rtk discover                    # find missed savings opportunities

If it does not work with Codex or Claude Code, adjust ~/.codex/AGENTS.md or ~/.claude/CLAUDE.md:

shell
$ nano ~/.codex/AGENTS.md

NO_BUILTINS. 
SIMPLE_CMD: rtk <cmd>
COMPLEX_PIPELINE: rtk proxy sh -c "<cmd>"
@/home/xxx/.codex/RTK.md

Cockpit Tools

An AI IDE account manager that currently supports Antigravity, Codex, GitHub Copilot, Windsurf, Kiro, Cursor, Gemini CLI, CodeBuddy, CodeBuddy CN, Qoder, Trae, and Zed. It also supports multi-account, multi-instance parallel runs.

jlcodes99/cockpit-tools - GitHub

shell
paru -S cockpit-tools-bin

Cursor

Cursor is built to make you dramatically more productive and is one of the strongest ways to code with AI.

Cursor · Download

shell
paru -S cursor-bin

Windsurf

Windsurf is an intuitive AI coding tool designed to keep you and your team productive.

Download Windsurf Editor and Plugins | Windsurf

shell
paru -S windsurf

Antigravity

Google Antigravity AI IDE is an agent-first development environment that combines code editing, terminal access, and browser-level automation, allowing AI to participate directly in writing, debugging, and validating software.

Google Antigravity Download

shell
paru -S antigravity

Kiro

Kiro provides a structured framework for AI coding through specification-driven development.

Downloads - Kiro

shell
paru -S kiro-ide

Trae

TRAE (/treɪ/) deeply integrates AI capabilities and acts like an AI software engineer that can understand requirements, use tools, and complete development tasks independently.

Download | TRAE - Collaborate with Intelligence

shell
paru -S trae-bin

Qoder

Qoder is an agentic programming platform built for real software projects.

Download | Qoder

shell
paru -S qoder-bin