[{"content":"Scan all Git repositories in this folder and bring me up to date. Phase 1: Inspect only - Report repo name, current branch, working tree status, uncommitted changes, local branches, tracking status, ahead/behind status, stale branches, and branches whose upstream no longer exists. - Fetch remotes safely and identify stale remote-tracking refs that can be pruned. - Do not modify anything yet. Phase 2: Summarize - Summarize per repo: - uncommitted work needing a decision - safe cleanup candidates - recommended actions Rules - Do not discard code, stash, reset, delete branches, or push without my approval. - Do not touch protected branches: main, master, develop, dev, release, production. - Never delete the current branch. After I approve: - prune stale remote-tracking refs - delete only safe local branches that are already merged and no longer have an upstream - refresh main branches using fetch + fast-forward only ","permalink":"https://kristoffer.dev/tools/prompts/weekly-reset/","summary":"\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-text\" data-lang=\"text\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eScan all Git repositories in this folder and bring me up to date.\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003ePhase 1: Inspect only\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e- Report repo name, current branch, working tree status, uncommitted changes, local branches, tracking status, ahead/behind status, stale branches, and branches whose upstream no longer exists.\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e- Fetch remotes safely and identify stale remote-tracking refs that can be 
pruned.\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e- Do not modify anything yet.\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003ePhase 2: Summarize\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e- Summarize per repo:\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  - uncommitted work needing a decision\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  - safe cleanup candidates\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  - recommended actions\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eRules\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e- Do not discard code, stash, reset, delete branches, or push without my approval.\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e- Do not touch protected branches: main, master, develop, dev, release, production.\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e- Never delete the current branch.\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eAfter I approve:\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e- prune stale remote-tracking refs\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e- delete only safe local branches that are 
already merged and no longer have an upstream\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e- refresh main branches using fetch + fast-forward only\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e","title":"Weekly reset"},{"content":"You found it.\nNot many do.\nThis page is for people who are curious enough to look past the surface — who find themselves typing commands into a terminal widget on a personal blog at an odd hour, just to see what happens.\nThat kind of curiosity is the best kind.\nThings I actually believe On software: The goal is not to write code. The goal is to eliminate the need for code. Every line you write is a liability you have to carry. Write less. Delete more.\nOn complexity: Most systems fail not because they were too simple, but because they became too complicated to understand. Simplicity is the hardest skill to develop — and the most undervalued.\nOn craft: There is a difference between code that works and code that communicates. The former is a product. The latter is a craft. Both matter. But when you have to choose, choose the one a future-you can read at 2am without wanting to cry.\nOn learning: The thing you are embarrassed not to know is the thing most worth learning next. Lean into the discomfort.\nOn building things: Ship something ugly. Then make it less ugly. Repeat. Perfection is a destination that keeps moving. Progress is a path you actually walk.\nSome lines that stuck \u0026ldquo;The purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise.\u0026rdquo; — Edsger W. Dijkstra\n\u0026ldquo;Any fool can write code that a computer can understand. Good programmers write code that humans can understand.\u0026rdquo; — Martin Fowler\n\u0026ldquo;We are what we repeatedly do. 
Excellence, then, is not an act, but a habit.\u0026rdquo; — Aristotle\n\u0026ldquo;Programs must be written for people to read, and only incidentally for machines to execute.\u0026rdquo; — Abelson \u0026amp; Sussman\nWhy this exists Because some things are worth hiding just a little. Not to be exclusive — but to reward the people who go looking.\nIf you got here through the terminal hack challenge: well done. If you used the Konami code: old school, respect. If you just happened to find the URL by guessing: you\u0026rsquo;ve got the right instincts.\nWelcome. Stay as long as you like.\n— Kristoffer\n","permalink":"https://kristoffer.dev/hidden/vault/","summary":"\u003cp\u003eYou found it.\u003c/p\u003e\n\u003cp\u003eNot many do.\u003c/p\u003e\n\u003chr\u003e\n\u003cp\u003eThis page is for people who are curious enough to look past the surface — who find themselves typing commands into a terminal widget on a personal blog at an odd hour, just to see what happens.\u003c/p\u003e\n\u003cp\u003eThat kind of curiosity is the best kind.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"things-i-actually-believe\"\u003eThings I actually believe\u003c/h2\u003e\n\u003cp\u003e\u003cstrong\u003eOn software:\u003c/strong\u003e\nThe goal is not to write code. The goal is to eliminate the need for code. Every line you write is a liability you have to carry. Write less. Delete more.\u003c/p\u003e","title":"The Vault"},{"content":" A practical, opinionated guide to running an autonomous AI assistant on a dedicated home lab server — with proper isolation, network control, and custom tooling.\nGoals You want a dedicated, always-on Linux machine that runs an AI agent (OpenClaw) with the following properties:\nIsolated execution environment — The agent runs under a dedicated Linux user (agent-openclaw) with strict permission boundaries. 
It cannot escalate to root, cannot access other users\u0026rsquo; data, and operates within well-defined filesystem and process boundaries.\nCustom CLI tool integration — Your own GoLang tools (Sky, PowerCtl) are available to the agent, authenticated via a mix of environment variables and Bitwarden CLI-fetched secrets, injected at runtime without persisting tokens on disk in plaintext.\nNetwork-level control via Tailscale — The agent machine joins your tailnet but can only reach specific services (n8n, Bitwarden, Obsidian vault shares) and specific external API domains. All other network traffic is denied by default. You have full visibility and control over what the agent can access.\nHardened OS — The Ubuntu Server 24.04 LTS installation follows security best practices: minimal attack surface, automatic security updates, audit logging, and strict firewall rules.\nOpenClaw as the runtime — The agent uses OpenClaw\u0026rsquo;s Gateway, channels, and skills platform. It runs as a systemd user service, restarts on failure, and exposes its Gateway only over Tailscale.\nSteps Overview Base Ubuntu Server Installation \u0026amp; Hardening User Accounts \u0026amp; Isolation Model Tailscale Network Control Secrets Management Installing Custom CLI Tools Installing OpenClaw systemd Service Configuration AppArmor Profile (Optional but Recommended) Operational Checklist Part 1: Base Ubuntu Server Installation \u0026amp; Hardening 1.1 — Minimal Installation Start with a Ubuntu Server 24.04 LTS minimal installation. 
During install:\nChoose \u0026ldquo;Ubuntu Server (minimized)\u0026rdquo; if offered Set up LVM with encryption (LUKS) for the root partition — this protects data at rest if the physical machine is stolen Create a single admin user (e.g., kristoffer) — this is your human operator account, not the agent Enable OpenSSH server during install After first boot, update everything:\nsudo apt update \u0026amp;\u0026amp; sudo apt upgrade -y sudo apt install -y unattended-upgrades apt-listchanges 1.2 — Enable Automatic Security Updates Configure unattended-upgrades to automatically apply security patches:\nsudo dpkg-reconfigure -plow unattended-upgrades Verify the configuration:\ncat /etc/apt/apt.conf.d/20auto-upgrades Should contain:\nAPT::Periodic::Update-Package-Lists \u0026#34;1\u0026#34;; APT::Periodic::Unattended-Upgrade \u0026#34;1\u0026#34;; Edit /etc/apt/apt.conf.d/50unattended-upgrades to enable automatic reboots if needed (for kernel updates):\nUnattended-Upgrade::Automatic-Reboot \u0026#34;true\u0026#34;; Unattended-Upgrade::Automatic-Reboot-Time \u0026#34;04:00\u0026#34;; 1.3 — SSH Hardening Edit /etc/ssh/sshd_config:\nsudo tee /etc/ssh/sshd_config.d/hardening.conf \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; # Disable password auth — keys only PasswordAuthentication no ChallengeResponseAuthentication no # Disable root login PermitRootLogin no # Limit to your admin user only AllowUsers kristoffer # Reduce attack surface X11Forwarding no MaxAuthTries 3 LoginGraceTime 30 ClientAliveInterval 300 ClientAliveCountMax 2 EOF sudo systemctl restart sshd Make sure your SSH key is in ~kristoffer/.ssh/authorized_keys before disabling password auth.\n1.4 — Firewall with UFW Set up a strict firewall. 
The agent machine should only be reachable via Tailscale, not on the LAN directly:\nsudo ufw default deny incoming sudo ufw default deny outgoing # Allow SSH from LAN as fallback (restrict to your subnet) sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp # Allow Tailscale interface (all traffic over tailnet is controlled by ACLs) sudo ufw allow in on tailscale0 sudo ufw allow out on tailscale0 # Allow DNS resolution sudo ufw allow out 53/udp sudo ufw allow out 53/tcp # Allow HTTPS out (needed for API calls, package management) sudo ufw allow out 443/tcp # Allow HTTP out (some package repos) sudo ufw allow out 80/tcp # Allow Tailscale\u0026#39;s own UDP traffic (WireGuard) sudo ufw allow out 41641/udp sudo ufw enable Note: We allow outgoing HTTPS/HTTP here at the OS level but will use Tailscale ACLs to control which hosts the agent can actually reach. UFW is the coarse filter; Tailscale ACLs are the fine-grained one.\n1.5 — Install Essential Packages sudo apt install -y \\ build-essential \\ curl \\ git \\ jq \\ tmux \\ htop \\ auditd \\ fail2ban \\ apparmor \\ apparmor-utils \\ acl \\ rsync 1.6 — Enable Audit Logging The audit daemon records system calls and can track what the agent user does:\nsudo systemctl enable auditd sudo systemctl start auditd # Add rules to monitor the agent user\u0026#39;s activities sudo tee /etc/audit/rules.d/agent-openclaw.rules \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; # Monitor all commands executed by the agent user (UID will be set after user creation) # We\u0026#39;ll update this after creating the agent user -a always,exit -F arch=b64 -S execve -F uid=2001 -k agent-exec -a always,exit -F arch=b64 -S openat -F uid=2001 -F dir=/etc -k agent-etc-access -a always,exit -F arch=b64 -S openat -F uid=2001 -F dir=/home/kristoffer -k agent-home-snoop EOF 1.7 — Configure fail2ban sudo systemctl enable fail2ban sudo systemctl start fail2ban Default config protects SSH. 
Good enough for a home lab.\nPart 2: User Accounts \u0026amp; Isolation Model The core principle: the agent runs as a dedicated, unprivileged Linux user with no sudo access and carefully controlled filesystem permissions.\n2.1 — Create the Agent User # Create a system group for agent users sudo groupadd --gid 2000 agents # Create the dedicated agent user sudo useradd \\ --uid 2001 \\ --gid agents \\ --create-home \\ --home-dir /home/agent-openclaw \\ --shell /bin/bash \\ --comment \u0026#34;OpenClaw AI Agent\u0026#34; \\ agent-openclaw # Lock the password — no direct login, only su/sudo from admin sudo passwd -l agent-openclaw Now update the audit rules with the correct UID:\nsudo augenrules --load 2.2 — Directory Structure Create a clean workspace layout for the agent:\n# Agent home structure sudo -u agent-openclaw mkdir -p /home/agent-openclaw/{.config,.local/bin,workspace,tools,secrets} # The .openclaw directory (OpenClaw\u0026#39;s data) sudo -u agent-openclaw mkdir -p /home/agent-openclaw/.openclaw/{workspace/skills,credentials} # Lock down permissions — agent owns their home, nobody else reads it sudo chmod 750 /home/agent-openclaw sudo chmod 700 /home/agent-openclaw/secrets 2.3 — Restrict Agent Filesystem Access Use filesystem permissions and ACLs to ensure the agent user cannot wander:\n# Agent cannot read other users\u0026#39; homes sudo chmod 750 /home/kristoffer # Agent cannot write to system directories (already the case by default, but verify) # Agent cannot access /root sudo chmod 700 /root # Create a shared directory for files you want the agent to access sudo mkdir -p /srv/agent-shared sudo chown kristoffer:agents /srv/agent-shared sudo chmod 2770 /srv/agent-shared # setgid so new files inherit group 2.4 — Restrict Process Visibility Prevent the agent user from seeing other users\u0026rsquo; processes:\n# Mount /proc with hidepid. The gid= option names the group that is EXEMPT # from hidepid, i.e. still allowed to see all processes: point it at the # admin\u0026#39;s adm group, never at the agents group, or the restriction is moot. echo \u0026#34;proc /proc proc defaults,hidepid=2,gid=$(getent group adm | cut -d: -f3) 0 0\u0026#34; | \\ sudo tee -a /etc/fstab # Apply now sudo mount -o remount,hidepid=2,gid=$(getent group adm | cut -d: -f3) /proc With hidepid=2, the agent-openclaw user can only see its own processes, not yours or other services'. Members of adm (which includes your admin user by default on Ubuntu) remain exempt, so monitoring from your own account still works.\n2.5 — Resource Limits with systemd and cgroups Prevent the agent from consuming all system resources. We\u0026rsquo;ll configure this later when we set up the systemd service, but the key limits will be:\nMemory: Capped at e.g. 4GB (adjust for your hardware) CPU: Limited to specific cores or percentage Max open files: Reasonable limit No ability to create new user namespaces (prevents container escapes) Part 3: Tailscale Network Control This is where you get surgical control over what the agent machine can talk to.\n3.1 — Install Tailscale curl -fsSL https://tailscale.com/install.sh | sh # Start tailscale and authenticate sudo tailscale up --ssh 3.2 — Tag the Agent Machine In your Tailscale admin console (https://login.tailscale.com/admin/acls), you\u0026rsquo;ll tag this machine. Tags are the foundation of ACL control.\nFirst, register the tag in your ACL policy:\n{ \u0026#34;tagOwners\u0026#34;: { \u0026#34;tag:agent\u0026#34;: [\u0026#34;autogroup:admin\u0026#34;], \u0026#34;tag:homelab\u0026#34;: [\u0026#34;autogroup:admin\u0026#34;], \u0026#34;tag:services\u0026#34;: [\u0026#34;autogroup:admin\u0026#34;], }, } Apply the tag to the agent machine:\nsudo tailscale up --advertise-tags=tag:agent Tag your other machines appropriately:\nYour n8n server → tag:services Your Bitwarden/Vaultwarden instance → tag:services Your NAS/Obsidian file server → tag:homelab Your personal machines → no tag (they\u0026rsquo;re your user identity) 3.3 — Tailscale ACL Policy This is the core of your network control. 
The ACL policy defines exactly what the agent machine can reach:\n{ \u0026#34;tagOwners\u0026#34;: { \u0026#34;tag:agent\u0026#34;: [\u0026#34;autogroup:admin\u0026#34;], \u0026#34;tag:homelab\u0026#34;: [\u0026#34;autogroup:admin\u0026#34;], \u0026#34;tag:services\u0026#34;: [\u0026#34;autogroup:admin\u0026#34;], }, \u0026#34;acls\u0026#34;: [ // Your personal machines can reach everything { \u0026#34;action\u0026#34;: \u0026#34;accept\u0026#34;, \u0026#34;src\u0026#34;: [\u0026#34;autogroup:member\u0026#34;], \u0026#34;dst\u0026#34;: [\u0026#34;*:*\u0026#34;], }, // Agent can reach specific services on your tailnet { \u0026#34;action\u0026#34;: \u0026#34;accept\u0026#34;, \u0026#34;src\u0026#34;: [\u0026#34;tag:agent\u0026#34;], \u0026#34;dst\u0026#34;: [ \u0026#34;tag:services:8080\u0026#34;, // n8n webhook port \u0026#34;tag:services:443\u0026#34;, // Vaultwarden/Bitwarden HTTPS \u0026#34;tag:homelab:445\u0026#34;, // SMB for Obsidian vault \u0026#34;tag:homelab:22\u0026#34;, // SSH/rsync for file access ], }, // Agent can reach the OpenClaw Gateway on itself (loopback via tailscale) { \u0026#34;action\u0026#34;: \u0026#34;accept\u0026#34;, \u0026#34;src\u0026#34;: [\u0026#34;tag:agent\u0026#34;], \u0026#34;dst\u0026#34;: [\u0026#34;tag:agent:18789\u0026#34;], }, // Services can talk back to agent (for webhooks from n8n, etc.) 
{ \u0026#34;action\u0026#34;: \u0026#34;accept\u0026#34;, \u0026#34;src\u0026#34;: [\u0026#34;tag:services\u0026#34;], \u0026#34;dst\u0026#34;: [\u0026#34;tag:agent:18789\u0026#34;], }, ], } Note: DNS settings (MagicDNS, nameservers such as Tailscale\u0026#39;s 100.100.100.100 resolver, and search domains like your-tailnet.ts.net) are managed on the DNS page of the Tailscale admin console, not in the policy file; the policy file has no dns section. 3.4 — Controlling Internet Access (Exit Node Pattern) Tailscale ACLs control tailnet traffic, but what about the agent reaching external APIs (Anthropic, OpenAI, GitHub, etc.)? There are two approaches:\nOption A: Allow outbound HTTPS at the OS level (simpler)\nYou already have UFW allowing outbound 443. The agent process can reach external APIs directly. This is the simplest approach and works well if you trust the agent not to exfiltrate data to random domains.\nOption B: Proxy through an exit node with DNS filtering (more control)\nSet up a machine on your tailnet as an exit node with DNS-level filtering (e.g., Pi-hole or NextDNS):\n# On your DNS/proxy machine sudo tailscale up --advertise-exit-node --advertise-tags=tag:services # On the agent machine — route all traffic through the exit node sudo tailscale up --exit-node=\u0026lt;exit-node-ip\u0026gt; --advertise-tags=tag:agent Then configure Pi-hole/NextDNS to only allow specific domains:\napi.anthropic.com api.openai.com github.com registry.npmjs.org pypi.org Your custom API domains This guide assumes Option A as the starting point. 
You can layer on Option B later.\n3.5 — Mount Obsidian Vault via Tailscale Since the agent needs access to your Obsidian vault on another machine, set up an SMB or NFS mount over Tailscale:\n# Install CIFS utilities sudo apt install -y cifs-utils # Create mount point sudo mkdir -p /mnt/obsidian-vault # Create a credentials file (readable only by root) sudo tee /etc/samba/.obsidian-creds \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; username=your-smb-user password=your-smb-password EOF sudo chmod 600 /etc/samba/.obsidian-creds # Add fstab entry — mounts over Tailscale IP, accessible by agent user echo \u0026#34;//your-nas.your-tailnet.ts.net/obsidian /mnt/obsidian-vault cifs credentials=/etc/samba/.obsidian-creds,uid=agent-openclaw,gid=agents,file_mode=0640,dir_mode=0750,nofail 0 0\u0026#34; | \\ sudo tee -a /etc/fstab sudo mount -a Now the agent can read the vault at /mnt/obsidian-vault but the mount credentials are stored securely.\nPart 4: Secrets Management Your GoLang tools (Sky, PowerCtl) use a mix of environment variables and Bitwarden CLI for authentication. Here\u0026rsquo;s how to set this up securely for the agent.\n4.1 — Install Bitwarden CLI # Install as the agent user sudo -u agent-openclaw bash \u0026lt;\u0026lt; \u0026#39;SETUP\u0026#39; cd /home/agent-openclaw curl -fsSL \u0026#34;https://vault.bitwarden.com/download/?app=cli\u0026amp;platform=linux\u0026#34; -o bw.zip unzip bw.zip -d .local/bin/ rm bw.zip chmod +x .local/bin/bw SETUP 4.2 — Bitwarden Session Management The agent should never store the Bitwarden master password on disk. 
Instead, use a systemd-managed session that unlocks at service start via a pre-stored API key or session token.\nCreate a wrapper script that the agent\u0026rsquo;s tools will use to fetch secrets:\nsudo -u agent-openclaw tee /home/agent-openclaw/tools/bw-get-secret.sh \u0026lt;\u0026lt; \u0026#39;SCRIPT\u0026#39; #!/usr/bin/env bash # Fetch a secret from Bitwarden by item name and field # Usage: bw-get-secret.sh \u0026#34;item-name\u0026#34; [field-name] # If field-name is omitted, returns the password field set -euo pipefail ITEM_NAME=\u0026#34;$1\u0026#34; FIELD_NAME=\u0026#34;${2:-password}\u0026#34; if [ -z \u0026#34;${BW_SESSION:-}\u0026#34; ]; then echo \u0026#34;ERROR: BW_SESSION not set. Bitwarden is not unlocked.\u0026#34; \u0026gt;\u0026amp;2 exit 1 fi if [ \u0026#34;$FIELD_NAME\u0026#34; = \u0026#34;password\u0026#34; ]; then bw get password \u0026#34;$ITEM_NAME\u0026#34; --session \u0026#34;$BW_SESSION\u0026#34; elif [ \u0026#34;$FIELD_NAME\u0026#34; = \u0026#34;username\u0026#34; ]; then bw get username \u0026#34;$ITEM_NAME\u0026#34; --session \u0026#34;$BW_SESSION\u0026#34; else bw get item \u0026#34;$ITEM_NAME\u0026#34; --session \u0026#34;$BW_SESSION\u0026#34; | jq -r \u0026#34;.fields[] | select(.name==\\\u0026#34;$FIELD_NAME\\\u0026#34;) | .value\u0026#34; fi SCRIPT sudo chmod 750 /home/agent-openclaw/tools/bw-get-secret.sh 4.3 — Environment File for Static Secrets For environment variables that don\u0026rsquo;t change often, use a systemd EnvironmentFile that is readable only by root and the agent:\n# Create the directory first sudo mkdir -p /etc/openclaw # Then create the environment file sudo tee /etc/openclaw/agent.env \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; # OpenClaw configuration ANTHROPIC_API_KEY=sk-ant-xxxxx OPENCLAW_GATEWAY_PORT=18789 # Static tool tokens (rotate periodically) SKY_API_ENDPOINT=https://your-api.example.com POWERCTL_CONFIG_PATH=/home/agent-openclaw/.config/powerctl/config.yaml # Bitwarden API key for automatic unlock 
BW_CLIENTID=user.xxxxx BW_CLIENTSECRET=xxxxx # The API key only handles login; bw unlock (see 4.4) still needs the master # password, read from BW_MASTER_PASSWORD below. Keeping it in this root-owned, # mode-640 file is a deliberate tradeoff for unattended startup. BW_MASTER_PASSWORD=xxxxx EOF # Set secure permissions sudo chmod 600 /etc/openclaw/agent.env sudo chown root:agents /etc/openclaw/agent.env sudo chmod 640 /etc/openclaw/agent.env 4.4 — Runtime Secret Injection Script Create a startup script that unlocks Bitwarden and populates dynamic secrets before OpenClaw starts:\nsudo -u agent-openclaw tee /home/agent-openclaw/tools/agent-startup.sh \u0026lt;\u0026lt; \u0026#39;SCRIPT\u0026#39; #!/usr/bin/env bash # Runs before OpenClaw Gateway starts # Unlocks Bitwarden and exports dynamic secrets set -euo pipefail export PATH=\u0026#34;/home/agent-openclaw/.local/bin:$PATH\u0026#34; # Login to Bitwarden using API key (set via EnvironmentFile) if [ -n \u0026#34;${BW_CLIENTID:-}\u0026#34; ] \u0026amp;\u0026amp; [ -n \u0026#34;${BW_CLIENTSECRET:-}\u0026#34; ]; then bw login --apikey 2\u0026gt;/dev/null || true export BW_SESSION=$(bw unlock --passwordenv BW_MASTER_PASSWORD --raw 2\u0026gt;/dev/null || echo \u0026#34;\u0026#34;) if [ -z \u0026#34;$BW_SESSION\u0026#34; ]; then echo \u0026#34;WARNING: Could not unlock Bitwarden vault. 
Dynamic secrets unavailable.\u0026#34; \u0026gt;\u0026amp;2 else echo \u0026#34;Bitwarden vault unlocked successfully.\u0026#34; # Fetch dynamic secrets and export them export SKY_ACCESS_TOKEN=$(bw get password \u0026#34;Sky CLI Token\u0026#34; --session \u0026#34;$BW_SESSION\u0026#34; 2\u0026gt;/dev/null || echo \u0026#34;\u0026#34;) export POWERCTL_TOKEN=$(bw get password \u0026#34;PowerCtl API Token\u0026#34; --session \u0026#34;$BW_SESSION\u0026#34; 2\u0026gt;/dev/null || echo \u0026#34;\u0026#34;) fi fi # Execute the actual command (OpenClaw gateway) exec \u0026#34;$@\u0026#34; SCRIPT sudo chmod 750 /home/agent-openclaw/tools/agent-startup.sh Part 5: Installing Custom CLI Tools 5.1 — Install Your GoLang Tools Since Sky and PowerCtl are your own GoLang binaries, install them into the agent\u0026rsquo;s local bin:\n# Copy pre-built binaries (adjust paths to where you build/distribute them) sudo cp /path/to/sky /home/agent-openclaw/.local/bin/sky sudo cp /path/to/powerctl /home/agent-openclaw/.local/bin/powerctl sudo chown agent-openclaw:agents /home/agent-openclaw/.local/bin/{sky,powerctl} sudo chmod 750 /home/agent-openclaw/.local/bin/{sky,powerctl} Alternatively, if you want the agent machine to pull them from a Git repo or your own release server:\n# If you publish releases on GitHub sudo -u agent-openclaw bash \u0026lt;\u0026lt; \u0026#39;INSTALL\u0026#39; cd /home/agent-openclaw/.local/bin curl -fsSL https://github.com/your-org/sky/releases/latest/download/sky-linux-amd64 -o sky curl -fsSL https://github.com/your-org/powerctl/releases/latest/download/powerctl-linux-amd64 -o powerctl chmod +x sky powerctl INSTALL 5.2 — Configure Tool Access for OpenClaw OpenClaw needs to know about these tools. 
Add them to the agent\u0026rsquo;s workspace:\nsudo -u agent-openclaw tee /home/agent-openclaw/.openclaw/workspace/TOOLS.md \u0026lt;\u0026lt; \u0026#39;TOOLSDOC\u0026#39; # Available CLI Tools ## Sky Location: `~/.local/bin/sky` Purpose: [Your description of what Sky does] Usage: `sky [command] [flags]` Authentication: Uses $SKY_ACCESS_TOKEN environment variable. ## PowerCtl Location: `~/.local/bin/powerctl` Purpose: [Your description of what PowerCtl does] Usage: `powerctl [command] [flags]` Authentication: Uses $POWERCTL_TOKEN environment variable and config at $POWERCTL_CONFIG_PATH. ## Bitwarden CLI Location: `~/.local/bin/bw` Purpose: Fetch secrets from your Bitwarden vault. Usage: `bw get password \u0026#34;item-name\u0026#34;` (requires $BW_SESSION to be set) Note: Session is established at agent startup. Use `bw-get-secret.sh` wrapper for convenience. TOOLSDOC 5.3 — PATH Configuration Ensure the agent\u0026rsquo;s PATH includes the local bin:\nsudo -u agent-openclaw tee -a /home/agent-openclaw/.bashrc \u0026lt;\u0026lt; \u0026#39;BASHRC\u0026#39; # Agent tool paths export PATH=\u0026#34;$HOME/.local/bin:$PATH\u0026#34; # Node.js via nvm. Part 6 installs into ~/.nvm, and the versioned install # directory changes with every patch release, so source nvm instead of # hard-coding a bin path. export NVM_DIR=\u0026#34;$HOME/.nvm\u0026#34; [ -s \u0026#34;$NVM_DIR/nvm.sh\u0026#34; ] \u0026amp;\u0026amp; . \u0026#34;$NVM_DIR/nvm.sh\u0026#34; BASHRC Part 6: Installing OpenClaw 6.1 — Install Node.js 22 OpenClaw requires Node.js \u0026gt;= 22:\n# Install nvm as the agent user sudo -u agent-openclaw bash \u0026lt;\u0026lt; \u0026#39;NVMSETUP\u0026#39; curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash export NVM_DIR=\u0026#34;$HOME/.nvm\u0026#34; source \u0026#34;$NVM_DIR/nvm.sh\u0026#34; nvm install 22 nvm use 22 nvm alias default 22 # Install pnpm globally npm install -g pnpm NVMSETUP 6.2 — Install OpenClaw sudo -u agent-openclaw bash \u0026lt;\u0026lt; \u0026#39;CLAWSETUP\u0026#39; export NVM_DIR=\u0026#34;$HOME/.nvm\u0026#34; source \u0026#34;$NVM_DIR/nvm.sh\u0026#34; # Install OpenClaw globally npm install -g openclaw@latest # Or from 
source for more control: # git clone https://github.com/openclaw/openclaw.git ~/openclaw-src # cd ~/openclaw-src # pnpm install # pnpm ui:build # pnpm build CLAWSETUP 6.3 — Configure OpenClaw Create the minimal configuration:\nsudo -u agent-openclaw mkdir -p /home/agent-openclaw/.openclaw sudo -u agent-openclaw tee /home/agent-openclaw/.openclaw/openclaw.json \u0026lt;\u0026lt; \u0026#39;CONFIG\u0026#39; { \u0026#34;agent\u0026#34;: { \u0026#34;model\u0026#34;: \u0026#34;anthropic/claude-opus-4-5\u0026#34; }, \u0026#34;gateway\u0026#34;: { \u0026#34;port\u0026#34;: 18789, \u0026#34;bind\u0026#34;: \u0026#34;loopback\u0026#34;, \u0026#34;tailscale\u0026#34;: { \u0026#34;mode\u0026#34;: \u0026#34;serve\u0026#34; }, \u0026#34;auth\u0026#34;: { \u0026#34;mode\u0026#34;: \u0026#34;password\u0026#34;, \u0026#34;password\u0026#34;: \u0026#34;CHANGE_ME_TO_SOMETHING_STRONG\u0026#34; } }, \u0026#34;channels\u0026#34;: { \u0026#34;telegram\u0026#34;: { \u0026#34;botToken\u0026#34;: \u0026#34;YOUR_TELEGRAM_BOT_TOKEN\u0026#34; } }, \u0026#34;browser\u0026#34;: { \u0026#34;enabled\u0026#34;: false } } CONFIG Key decisions here:\nbind: \u0026quot;loopback\u0026quot; — Gateway only listens on 127.0.0.1. External access is via Tailscale Serve only. tailscale.mode: \u0026quot;serve\u0026quot; — Exposes the Gateway dashboard over your tailnet with HTTPS, but not to the public internet. auth.mode: \u0026quot;password\u0026quot; — Adds a password requirement even for tailnet access. browser.enabled: false — Disabled by default on a headless server. Enable if you install a headless Chrome later. 
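Before moving on, it is worth sanity-checking the config you just wrote: a JSON typo or a forgotten placeholder password is easiest to catch now, before systemd is involved. A minimal sketch (the jq checks mirror the settings above; the path is the one used throughout this guide):

```shell
# Sanity-check openclaw.json: valid JSON, loopback bind, expected port,
# and no placeholder password left in place.
CONF="${CONF:-/home/agent-openclaw/.openclaw/openclaw.json}"
if jq -e '
  .gateway.bind == "loopback"
  and .gateway.port == 18789
  and (.gateway.auth.password | test("CHANGE_ME") | not)
' "$CONF" >/dev/null; then
  echo "openclaw.json: OK"
else
  echo "openclaw.json: FAILED check" >&2
fi
```

Because jq -e exits non-zero when the expression is false (or the file is not valid JSON), the same check drops cleanly into a pre-start script.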
6.4 — Run the Onboarding Wizard sudo -iu agent-openclaw openclaw onboard Follow the wizard to set up your preferred channels (Telegram, Discord, etc.), identity, and workspace.\nPart 7: systemd Service Configuration 7.1 — Create the Service Unit nvm installs Node into a versioned directory (e.g. ~/.nvm/versions/node/v22.11.0) whose name changes on every upgrade, so first pin a stable path for the unit file to reference (re-run this after upgrading Node):\nsudo -u agent-openclaw bash -c \u0026#39;ln -sfn \u0026#34;$(ls -d /home/agent-openclaw/.nvm/versions/node/v22.*/ | sort -V | tail -1)\u0026#34; /home/agent-openclaw/.nvm/current\u0026#39; Then create the unit:\nsudo tee /etc/systemd/system/openclaw-agent.service \u0026lt;\u0026lt; \u0026#39;UNIT\u0026#39; [Unit] Description=OpenClaw AI Agent Gateway After=network-online.target tailscaled.service Wants=network-online.target StartLimitIntervalSec=300 StartLimitBurst=5 [Service] Type=simple User=agent-openclaw Group=agents # Environment files EnvironmentFile=/etc/openclaw/agent.env # Use the startup script to inject dynamic secrets ExecStart=/home/agent-openclaw/tools/agent-startup.sh \\ /home/agent-openclaw/.nvm/current/bin/node \\ /home/agent-openclaw/.nvm/current/bin/openclaw \\ gateway --port 18789 --verbose # Restart policy Restart=on-failure RestartSec=10 # Working directory WorkingDirectory=/home/agent-openclaw # Resource limits (adjust for your hardware) MemoryMax=4G MemoryHigh=3G CPUQuota=200% TasksMax=256 LimitNOFILE=8192 # Security hardening NoNewPrivileges=true ProtectSystem=strict ProtectHome=tmpfs BindPaths=/home/agent-openclaw BindReadOnlyPaths=/mnt/obsidian-vault PrivateTmp=true ProtectKernelTunables=true ProtectKernelModules=true ProtectControlGroups=true RestrictSUIDSGID=true RestrictNamespaces=true RestrictRealtime=true LockPersonality=true SystemCallArchitectures=native # Logging StandardOutput=journal StandardError=journal SyslogIdentifier=openclaw-agent [Install] WantedBy=multi-user.target UNIT This is a heavily hardened service configuration. 
Key security properties:\nNoNewPrivileges=true — The agent process cannot gain additional privileges ProtectSystem=strict — The entire filesystem is read-only except explicitly allowed paths ProtectHome=tmpfs — All home directories are hidden, then we bind-mount only the agent\u0026rsquo;s home BindPaths — Only the agent\u0026rsquo;s home is writable BindReadOnlyPaths — Obsidian vault is mounted read-only RestrictNamespaces=true — Cannot create user/network namespaces (prevents container escape patterns) MemoryMax=4G — Hard memory cap 7.2 — Enable and Start sudo systemctl daemon-reload sudo systemctl enable openclaw-agent.service sudo systemctl start openclaw-agent.service # Check status sudo systemctl status openclaw-agent.service sudo journalctl -u openclaw-agent.service -f 7.3 — Log Monitoring Set up structured log access:\n# View agent logs sudo journalctl -u openclaw-agent.service --since \u0026#34;1 hour ago\u0026#34; # View audit logs for the agent user sudo ausearch -ua 2001 --start today # Follow live sudo journalctl -u openclaw-agent.service -f Part 8: AppArmor Profile (Optional but Recommended) For an additional layer of mandatory access control, create an AppArmor profile for the OpenClaw process:\nsudo tee /etc/apparmor.d/openclaw-agent \u0026lt;\u0026lt; \u0026#39;APPARMOR\u0026#39; #include \u0026lt;tunables/global\u0026gt; profile openclaw-agent /home/agent-openclaw/.nvm/versions/node/*/bin/node { #include \u0026lt;abstractions/base\u0026gt; #include \u0026lt;abstractions/nameservice\u0026gt; # Node.js and OpenClaw /home/agent-openclaw/.nvm/** rix, /home/agent-openclaw/.local/bin/** rix, # Agent workspace (read/write) /home/agent-openclaw/ r, /home/agent-openclaw/** rw, /home/agent-openclaw/.openclaw/** rw, # Obsidian vault (read-only) /mnt/obsidian-vault/ r, /mnt/obsidian-vault/** r, # Shared directory /srv/agent-shared/** rw, # Deny sensitive paths deny /etc/shadow r, deny /etc/passwd w, deny /home/kristoffer/** rw, deny /root/** rw, # Network 
network inet stream, network inet dgram, network inet6 stream, network inet6 dgram, # Temp /tmp/** rw, /var/tmp/** rw, } APPARMOR sudo apparmor_parser -r /etc/apparmor.d/openclaw-agent Part 9: Operational Checklist Verification Steps After completing the setup, verify each layer:\n# 1. User isolation — agent cannot read your home sudo -u agent-openclaw ls /home/kristoffer # Should: Permission denied # 2. Process isolation — agent cannot see your processes sudo -u agent-openclaw ps aux # Should: Only see agent-openclaw\u0026#39;s own processes # 3. Tailscale connectivity — agent can reach allowed services sudo -u agent-openclaw curl -s https://your-n8n.your-tailnet.ts.net/healthz # Should: 200 OK # 4. Tailscale isolation — agent cannot reach random tailnet machines sudo -u agent-openclaw curl -s https://your-desktop.your-tailnet.ts.net:22 # Should: Timeout/refused (if not in ACL) # 5. Tools work — Sky and PowerCtl are available sudo -u agent-openclaw /home/agent-openclaw/.local/bin/sky --version sudo -u agent-openclaw /home/agent-openclaw/.local/bin/powerctl --version # 6. OpenClaw is running sudo systemctl status openclaw-agent.service curl -s http://127.0.0.1:18789/health # 7. 
Audit trail is recording sudo ausearch -ua 2001 --start today -i | head -20 Maintenance Tasks Task Frequency Command OS security updates Daily (auto) unattended-upgrades handles this OpenClaw updates Weekly/as needed sudo -iu agent-openclaw npm update -g openclaw Rotate API keys Monthly Update /etc/openclaw/agent.env and Bitwarden Review audit logs Weekly sudo ausearch -ua 2001 --start recent -i Check Tailscale ACLs After changes Tailscale admin console Backup agent workspace Weekly rsync to your NAS Update Sky/PowerCtl As released Replace binaries in .local/bin/ Backup the Agent State # Cron job to backup the agent\u0026#39;s workspace and config sudo tee /etc/cron.d/openclaw-backup \u0026lt;\u0026lt; \u0026#39;CRON\u0026#39; 0 3 * * * root rsync -az --delete /home/agent-openclaw/.openclaw/ /srv/backups/openclaw/ 2\u0026gt;\u0026amp;1 | logger -t openclaw-backup CRON Architecture Summary ┌─────────────────────────────────────────────────────────────────┐ │ Your Tailnet (WireGuard) │ │ │ │ ┌──────────────┐ ┌──────────────┐ ┌───────────────────────┐ │ │ │ Your Desktop │ │ NAS/Files │ │ Services (n8n, BW) │ │ │ │ (personal) │ │ tag:homelab │ │ tag:services │ │ │ └──────┬───────┘ └──────┬───────┘ └───────────┬───────────┘ │ │ │ │ │ │ │ │ Tailscale ACLs control all traffic │ │ │ │ │ │ │ │ ┌──────┴─────────────────┴───────────────────────┴──────────┐ │ │ │ Agent Machine (tag:agent) │ │ │ │ Ubuntu 24.04 LTS — Hardened │ │ │ │ │ │ │ │ ┌──────────────────────────────────────────────────────┐ │ │ │ │ │ user: agent-openclaw (uid 2001) │ │ │ │ │ │ │ │ │ │ │ │ ┌─────────────┐ ┌────────┐ ┌──────────────────┐ │ │ │ │ │ │ │ OpenClaw │ │ Sky │ │ PowerCtl │ │ │ │ │ │ │ │ Gateway │ │ CLI │ │ CLI │ │ │ │ │ │ │ │ :18789 │ │ │ │ │ │ │ │ │ │ │ └──────┬───────┘ └───┬────┘ └────────┬─────────┘ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └──────────────┴─────────────────┘ │ │ │ │ │ │ │ │ │ │ │ │ │ Secrets: env vars + Bitwarden CLI runtime fetch │ │ │ │ │ │ Filesystem: home + /mnt/obsidian-vault (ro) │ │ 
│ │ │ │ Resources: 4GB RAM, 200% CPU, 256 tasks max │ │ │ │ │ └──────────────────────────────────────────────────────┘ │ │ │ │ │ │ │ │ Security layers: │ │ │ │ ✓ UFW firewall (coarse) │ │ │ │ ✓ Tailscale ACLs (fine-grained) │ │ │ │ ✓ systemd sandboxing (ProtectSystem, NoNewPrivileges) │ │ │ │ ✓ AppArmor MAC profile │ │ │ │ ✓ hidepid=2 (process isolation) │ │ │ │ ✓ auditd (full command logging) │ │ │ │ ✓ LUKS disk encryption │ │ │ └────────────────────────────────────────────────────────────┘ │ │ │ │ External APIs (Anthropic, OpenAI, etc.) │ │ ← Allowed via UFW outbound HTTPS │ │ ← Optional: DNS filtering via exit node │ └─────────────────────────────────────────────────────────────────┘ What\u0026rsquo;s Next Once this foundation is running, you can extend it:\nAdd more agent users — Create agent-codex, agent-research, etc., each with their own UID, Tailscale tag, and ACL rules Enable OpenClaw browser control — Install headless Chromium and set browser.enabled: true for web automation Set up OpenClaw skills — Build custom skills that wrap your Sky and PowerCtl tools with natural language interfaces Add n8n webhooks — Use OpenClaw\u0026rsquo;s webhook surface to receive triggers from your n8n automations DNS filtering — Deploy a Pi-hole exit node for granular domain-level control over the agent\u0026rsquo;s internet access Prometheus/Grafana monitoring — Monitor the agent\u0026rsquo;s resource usage, API costs, and activity patterns The key insight of this architecture is defense in depth: no single layer is responsible for security. The agent is constrained by Linux user permissions, systemd sandboxing, AppArmor mandatory access control, Tailscale network ACLs, and OS-level firewalling — all working together. 
If any one layer has a gap, the others compensate.\n","permalink":"https://kristoffer.dev/blog/hardened-linux-environment-for-openclaw-ai-agent/","summary":"\u003cblockquote\u003e\n\u003cp\u003eA practical, opinionated guide to running an autonomous AI assistant on a dedicated home lab server — with proper isolation, network control, and custom tooling.\u003c/p\u003e\u003c/blockquote\u003e\n\u003ch2 id=\"goals\"\u003eGoals\u003c/h2\u003e\n\u003cp\u003eYou want a dedicated, always-on Linux machine that runs an AI agent (OpenClaw) with the following properties:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\n\u003cp\u003e\u003cstrong\u003eIsolated execution environment\u003c/strong\u003e — The agent runs under a dedicated Linux user (\u003ccode\u003eagent-openclaw\u003c/code\u003e) with strict permission boundaries. It cannot escalate to root, cannot access other users\u0026rsquo; data, and operates within well-defined filesystem and process boundaries.\u003c/p\u003e","title":"Building a Hardened Linux Environment for Your OpenClaw AI Agent"},{"content":"If you\u0026rsquo;ve built a command-line tool and want to make it easy for users to install, Homebrew is one of the best distribution methods available. In this guide, I\u0026rsquo;ll walk you through creating your own Homebrew tap—a custom repository that allows users to install your software with a simple brew install command.\nSteps Overview Prepare Your Binary Releases Create Your Tap Repository Write Your Formula Get SHA256 Checksums Push Your Formula Test Your Tap Share Your Tap What is Homebrew? Homebrew is the most popular package manager for macOS, and it also works on Linux. It allows users to install software from the command line with simple commands like brew install wget. 
Think of it as an app store for command-line tools and applications.\nUnderstanding Homebrew Vocabulary Before we dive in, let\u0026rsquo;s clarify some key terms:\nFormula: A Ruby script that tells Homebrew how to download, build, and install a piece of software. It\u0026rsquo;s essentially a recipe for installing your application.\nTap: A third-party repository of formulas. While Homebrew has a main repository (homebrew-core) with thousands of formulas, taps allow anyone to create their own custom repository. The name comes from the idea of \u0026ldquo;tapping a keg\u0026rdquo;—you\u0026rsquo;re adding a new source of software.\nBottle: A pre-compiled binary version of software. Instead of compiling from source, users can download a ready-to-use binary, making installation much faster.\nCask: A formula specifically for installing GUI applications (like browsers or editors) rather than command-line tools.\nCellar: The directory where Homebrew installs software (/usr/local/Cellar on Intel Macs, /opt/homebrew/Cellar on Apple Silicon).\nWhy Create a Tap? Creating your own tap is perfect when:\nYou want to distribute your own CLI tools or applications Your software isn\u0026rsquo;t popular enough (yet!) for homebrew-core You need faster iteration without going through the homebrew-core review process You want to maintain multiple versions or experimental releases Prerequisites Before starting, make sure you have:\nA GitHub account A CLI tool or application you want to distribute Binary releases of your application (we\u0026rsquo;ll cover this) Basic familiarity with Git and GitHub Step 1: Prepare Your Binary Releases Homebrew formulas need downloadable binaries. The most common approach is to use GitHub Releases.\nUsing GoReleaser (for Go projects) If your project is written in Go, GoReleaser makes this incredibly easy. 
Add a .goreleaser.yml file:\nbuilds: - env: - CGO_ENABLED=0 goos: - linux - darwin goarch: - amd64 - arm64 archives: - format: tar.gz name_template: \u0026gt;- {{ .ProjectName }}_{{ .Version }}_ {{- title .Os }}_ {{- if eq .Arch \u0026#34;amd64\u0026#34; }}x86_64 {{- else }}{{ .Arch }}{{ end }} checksum: name_template: \u0026#34;checksums.txt\u0026#34; Then create a release:\ngit tag -a v1.0.0 -m \u0026#34;First release\u0026#34; git push origin v1.0.0 goreleaser release --clean Manual Binary Releases If you\u0026rsquo;re not using Go or GoReleaser, you\u0026rsquo;ll need to:\nBuild binaries for different platforms (macOS Intel, macOS ARM, Linux x86_64, Linux ARM64) Package them as .tar.gz archives Upload them to a GitHub release Generate SHA256 checksums Example for generating checksums:\nshasum -a 256 your-app-darwin-arm64.tar.gz shasum -a 256 your-app-darwin-amd64.tar.gz shasum -a 256 your-app-linux-amd64.tar.gz Step 2: Create Your Tap Repository Your tap must be a GitHub repository with a specific naming convention:\nCreate a new repository named homebrew-\u0026lt;your-tap-name\u0026gt;\nFor example: homebrew-tools or homebrew-myapp The homebrew- prefix is required Create the directory structure:\nmkdir -p Formula That\u0026rsquo;s it! Homebrew only needs the Formula directory.\nStep 3: Write Your Formula Create a file at Formula/\u0026lt;your-app\u0026gt;.rb. Here\u0026rsquo;s a complete example for a CLI tool called \u0026ldquo;sky\u0026rdquo;:\nclass Sky \u0026lt; Formula desc \u0026#34;A powerful CLI tool for cloud management\u0026#34; homepage \u0026#34;https://github.com/yourusername/sky-cli\u0026#34; version \u0026#34;1.0.0\u0026#34; on_macos do if Hardware::CPU.arm? 
url \u0026#34;https://github.com/yourusername/sky-cli/releases/download/v1.0.0/sky_1.0.0_Darwin_arm64.tar.gz\u0026#34; sha256 \u0026#34;abc123...your-arm64-sha256-here\u0026#34; else url \u0026#34;https://github.com/yourusername/sky-cli/releases/download/v1.0.0/sky_1.0.0_Darwin_x86_64.tar.gz\u0026#34; sha256 \u0026#34;def456...your-x86_64-sha256-here\u0026#34; end end on_linux do if Hardware::CPU.arm? url \u0026#34;https://github.com/yourusername/sky-cli/releases/download/v1.0.0/sky_1.0.0_Linux_arm64.tar.gz\u0026#34; sha256 \u0026#34;ghi789...your-linux-arm64-sha256-here\u0026#34; else url \u0026#34;https://github.com/yourusername/sky-cli/releases/download/v1.0.0/sky_1.0.0_Linux_x86_64.tar.gz\u0026#34; sha256 \u0026#34;jkl012...your-linux-x86_64-sha256-here\u0026#34; end end def install bin.install \u0026#34;sky\u0026#34; end test do system \u0026#34;#{bin}/sky\u0026#34;, \u0026#34;--version\u0026#34; end end Understanding the Formula Let\u0026rsquo;s break down each section:\nclass Sky \u0026lt; Formula: The class name must match your filename (sky.rb → Sky class), capitalized desc: A short, one-line description of your tool homepage: Your project\u0026rsquo;s website or GitHub repository version: The version number on_macos / on_linux: Platform-specific configurations Hardware::CPU.arm?: Detects ARM architecture (M1/M2 Macs, ARM Linux) url: The download URL for your binary sha256: The checksum to verify download integrity install: Instructions for installing your binary (typically just copying to the bin directory) test: A simple test to verify the installation works Step 4: Get SHA256 Checksums You need SHA256 checksums for each binary. 
The easiest way:\n# Download and checksum in one command curl -sL https://github.com/yourusername/app/releases/download/v1.0.0/app_Darwin_arm64.tar.gz | shasum -a 256 # Or if you have the file locally shasum -a 256 app_Darwin_arm64.tar.gz Copy these checksums into your formula where indicated.\nStep 5: Push Your Formula git add Formula/your-app.rb git commit -m \u0026#34;Add formula for your-app v1.0.0\u0026#34; git push origin main Step 6: Test Your Tap Now you can test your tap locally:\n# Add your tap brew tap yourusername/your-tap-name # Install your application brew install your-app # Test it works your-app --version Step 7: Share Your Tap Users can now install your application with just two commands:\nbrew tap yourusername/your-tap-name brew install your-app Or in a single command:\nbrew install yourusername/your-tap-name/your-app Updating Your Formula When you release a new version:\nCreate a new GitHub release with updated binaries Get the new SHA256 checksums Update your formula with new version, URLs, and checksums Commit and push Users can then update with:\nbrew update brew upgrade your-app Advanced Tips Auto-update with Dependabot Add .github/dependabot.yml to your tap repository:\nversion: 2 updates: - package-ecosystem: \u0026#34;github-actions\u0026#34; directory: \u0026#34;/\u0026#34; schedule: interval: \u0026#34;weekly\u0026#34; Adding Dependencies If your app requires other software:\ndepends_on \u0026#34;python@3.11\u0026#34; depends_on \u0026#34;git\u0026#34; Multiple Binaries If your release contains multiple executables:\ndef install bin.install \u0026#34;main-app\u0026#34; bin.install \u0026#34;helper-tool\u0026#34; bin.install \u0026#34;utility\u0026#34; end Configuration Files Install configuration or documentation files:\ndef install bin.install \u0026#34;your-app\u0026#34; etc.install \u0026#34;config.yml\u0026#34; =\u0026gt; \u0026#34;your-app/config.yml\u0026#34; doc.install \u0026#34;README.md\u0026#34; end Troubleshooting Common 
Issues \u0026ldquo;Formula not found\u0026rdquo; error Make sure your repository name starts with homebrew- and your formula is in the Formula/ directory.\nSHA256 mismatch If you get a checksum error, recalculate your SHA256:\ncurl -sL YOUR_URL | shasum -a 256 Cached formula not updating Force Homebrew to refresh:\nbrew untap yourusername/your-tap brew tap yourusername/your-tap Or clear the cache:\nrm -rf $(brew --repository)/Library/Taps/yourusername/homebrew-your-tap brew tap yourusername/your-tap Downloads failing with 404 Verify your release assets are publicly accessible. Go to your GitHub release page and try downloading the files manually. If your repository is private, the assets won\u0026rsquo;t be accessible via Homebrew.\nBest Practices Test before publishing: Always test your formula locally before pushing Use semantic versioning: Follow semver (1.0.0, 1.1.0, 2.0.0) Keep formulas simple: Don\u0026rsquo;t add unnecessary complexity Document installation: Add installation instructions to your project\u0026rsquo;s README Maintain checksums: Always verify SHA256 checksums match your binaries Support multiple platforms: Provide binaries for both Intel and ARM architectures Add a test block: Include a simple test to verify installation Next Steps Now that you have your tap set up, consider:\nAdding your tap to awesome lists and directories Writing documentation for your users Automating formula updates with CI/CD Eventually submitting to homebrew-core once your project gains traction Have questions or run into issues? Check out the official Homebrew documentation.\n","permalink":"https://kristoffer.dev/blog/guide-to-creating-your-first-homebrew-tap/","summary":"\u003cp\u003eIf you\u0026rsquo;ve built a command-line tool and want to make it easy for users to install, Homebrew is one of the best distribution methods available. 
In this guide, I\u0026rsquo;ll walk you through creating your own Homebrew tap—a custom repository that allows users to install your software with a simple \u003ccode\u003ebrew install\u003c/code\u003e command.\u003c/p\u003e\n\u003ch2 id=\"steps-overview\"\u003eSteps Overview\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003e\u003ca href=\"#step-1-prepare-your-binary-releases\"\u003ePrepare Your Binary Releases\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"#step-2-create-your-tap-repository\"\u003eCreate Your Tap Repository\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"#step-3-write-your-formula\"\u003eWrite Your Formula\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"#step-4-get-sha256-checksums\"\u003eGet SHA256 Checksums\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"#step-5-push-your-formula\"\u003ePush Your Formula\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"#step-6-test-your-tap\"\u003eTest Your Tap\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"#step-7-share-your-tap\"\u003eShare Your Tap\u003c/a\u003e\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"what-is-homebrew\"\u003eWhat is Homebrew?\u003c/h2\u003e\n\u003cp\u003e\u003cstrong\u003eHomebrew\u003c/strong\u003e is the most popular package manager for macOS, and it also works on Linux. It allows users to install software from the command line with simple commands like \u003ccode\u003ebrew install wget\u003c/code\u003e. 
Think of it as an app store for command-line tools and applications.\u003c/p\u003e","title":"Creating Your First Homebrew Tap: A Complete Guide"},{"content":"💥 Spark: Events as Internal APIs At NDC Oslo, one quote from James Eastham really stuck with me during his talk:\n\u0026ldquo;Think about your events as internal APIs.\u0026rdquo;\nThis simple mindset shift has deep implications for how we design, evolve, and document event-driven systems.\nJust like internal APIs, events deserve:\nClear contracts — schemas, versioning, and well-defined intent Ownership — someone needs to own the lifecycle and semantics Documentation — make it obvious what the event means and when it\u0026rsquo;s emitted Stability — no breaking changes without coordination This spark reminded me that reliability in event-driven systems isn’t just about infra or retries — it’s about treating your events as first-class citizens in your architecture.\n🔗 Talk: So You Want to Maintain a Reliable Event Driven System — James Eastham\nThis is part of a series of Sparks — short takeaways and memorable insights from conferences, talks, and daily learnings.\n","permalink":"https://kristoffer.dev/blog/sparks-from-ndc-events-as-apis/","summary":"\u003ch2 id=\"-spark-events-as-internal-apis\"\u003e💥 Spark: Events as Internal APIs\u003c/h2\u003e\n\u003cp\u003eAt NDC Oslo, one quote from \u003ca href=\"https://ndcoslo.com/speakers/james-eastham\"\u003eJames Eastham\u003c/a\u003e really stuck with me during his talk:\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003e\u003cstrong\u003e\u0026ldquo;Think about your events as internal APIs.\u0026rdquo;\u003c/strong\u003e\u003c/p\u003e\u003c/blockquote\u003e\n\u003cp\u003eThis simple mindset shift has deep implications for how we design, evolve, and document event-driven systems.\u003c/p\u003e\n\u003cp\u003eJust like internal APIs, events deserve:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eClear contracts\u003c/strong\u003e — schemas, 
versioning, and well-defined intent\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eOwnership\u003c/strong\u003e — someone needs to own the lifecycle and semantics\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eDocumentation\u003c/strong\u003e — make it obvious what the event means and when it\u0026rsquo;s emitted\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eStability\u003c/strong\u003e — no breaking changes without coordination\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThis spark reminded me that \u003cem\u003ereliability in event-driven systems isn’t just about infra or retries — it’s about treating your events as first-class citizens\u003c/em\u003e in your architecture.\u003c/p\u003e","title":"Spark from NDC Oslo: Events as Internal APIs"},{"content":"The Power of Ten - Rules for Developing Safety Critical Code The \u0026ldquo;Power of Ten\u0026rdquo; is a set of coding rules designed by NASA\u0026rsquo;s Gerard J. Holzmann to improve the safety, reliability, and maintainability of safety-critical software. The document outlines ten essential rules for writing robust, verifiable code in critical systems, with a focus on simplicity, strict control flow, memory management, and code clarity.\nThese rules are especially relevant when the cost of failure is high, like in space missions or medical devices. The emphasis on tool-based compliance checks and practical coding practices makes these guidelines highly valuable for developers working on critical software.\nFor an engaging breakdown and reaction to the rules, check out ThePrimeagen\u0026rsquo;s video, where he reads and analyzes the document. Whether you prefer to dive straight into the document or watch a developer\u0026rsquo;s perspective on it, both links offer insightful and informative content.\nHighlights:\nTen essential rules for safe and reliable coding. Focus on simplicity, defensive programming, and code clarity. Insights from NASA\u0026rsquo;s approach to safety-critical systems. 
Video reaction and breakdown by ThePrimeagen. Explore the document here (PDF)\nWatch ThePrimeagen\u0026rsquo;s reaction video\n","permalink":"https://kristoffer.dev/blog/power-of-ten/","summary":"\u003ch2 id=\"the-power-of-ten---rules-for-developing-safety-critical-code\"\u003eThe Power of Ten - Rules for Developing Safety Critical Code\u003c/h2\u003e\n\u003cp\u003eThe \u0026ldquo;Power of Ten\u0026rdquo; is a set of coding rules designed by NASA\u0026rsquo;s Gerard J. Holzmann to improve the safety, reliability,\nand maintainability of safety-critical software. The document outlines ten essential rules for writing robust,\nverifiable code in critical systems, with a focus on simplicity, strict control flow, memory management, and code clarity.\u003c/p\u003e","title":"Link Sparks: The Power of Ten - Rules for Developing Safety Critical Code"},{"content":"Syncing Obsidian vaults with Git works great until machine-specific files create conflicts or you accidentally commit sensitive notes. This guide gives you two battle-tested .gitignore configurations and a safety checklist to avoid common pitfalls.\nQuick take: Use the Strict setup to eliminate config conflicts entirely. Add .gitattributes for cleaner diffs. Keep your repo private unless you\u0026rsquo;re building a public knowledge base.\nRecommended .gitignore A) Minimal (Keep plugin settings, drop noise) Keeps plugin settings and config across devices. Excludes cache and workspace files.\n# OS and editor clutter .DS_Store ._* Thumbs.db ehthumbs.db Icon? .Spotlight-V100 .Trashes *.swp *.swo *.tmp # Obsidian .trash/ .obsidian/cache/ .obsidian/workspace* .obsidian/updates.json # Common plugin caches .obsidian/plugins/*/cache/ .obsidian/plugins/*/.cache/ .obsidian/plugins/*/node_modules/ B) Strict (Zero config conflicts) Excludes all .obsidian/ config. No merge conflicts, ever. Set up plugins once per device.\n# OS and editor clutter .DS_Store ._* Thumbs.db ehthumbs.db Icon? 
.Spotlight-V100 .Trashes *.swp *.swo *.tmp # Obsidian - exclude all configuration .obsidian/ .trash/ Recommended .gitattributes Add this to get better diffs and prevent line-ending issues:\n# Ensure consistent line endings * text=auto *.md text # Treat binary files properly *.png binary *.jpg binary *.jpeg binary *.gif binary *.pdf binary Safety Checklist Keep your repo private - Your notes likely contain personal information Review before first commit - Run git status and git diff to verify what you\u0026rsquo;re committing Scan for sensitive data - Check for API keys, passwords, or private details Test on a clone first - Clone to a temp folder and verify the workflow before syncing devices Write meaningful commits - \u0026ldquo;Updated notes\u0026rdquo; helps no one. \u0026ldquo;Added project planning docs\u0026rdquo; helps future you Basic Sync Workflow # Pull changes from other devices git pull # Stage and commit your changes git add . git commit -m \u0026#34;Update notes from [device name]\u0026#34; # Push to GitHub git push For automatic syncing, consider using Git plugins like Obsidian Git that can auto-commit and push at regular intervals.\n","permalink":"https://kristoffer.dev/blog/obsidian-gitignore/","summary":"\u003cp\u003eSyncing Obsidian vaults with Git works great until machine-specific files create conflicts or you accidentally commit sensitive notes. This guide gives you two battle-tested .gitignore configurations and a safety checklist to avoid common pitfalls.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eQuick take\u003c/strong\u003e: Use the Strict setup to eliminate config conflicts entirely. Add .gitattributes for cleaner diffs. 
Keep your repo private unless you\u0026rsquo;re building a public knowledge base.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"recommended-gitignore\"\u003eRecommended \u003ccode\u003e.gitignore\u003c/code\u003e\u003c/h2\u003e\n\u003ch3 id=\"a-minimal-keep-plugin-settings-drop-noise\"\u003eA) Minimal (Keep plugin settings, drop noise)\u003c/h3\u003e\n\u003cp\u003eKeeps plugin settings and config across devices. Excludes cache and workspace files.\u003c/p\u003e","title":"Obsidian + GitHub: Safe .gitignore Setup"},{"content":"Git submodules are a powerful feature that allows you to include and manage other repositories as part of your main repository. This is especially useful for projects where you want to use external repositories (e.g., a custom Neovim configuration) while keeping them independent. In this post, we\u0026rsquo;ll cover how to set up, update, and manage Git submodules with a practical example.\nWhat Are Git Submodules? A submodule in Git is a pointer to a specific commit of another repository. It allows you to include that repository as a part of your main project without merging its content directly into your repository.\nHow to Set Up a Git Submodule Add a Submodule to Your Repository\nUse the git submodule add command to include an external repository as a submodule:\ngit submodule add https://github.com/yourusername/nvim-config.git .config/nvim https://github.com/yourusername/nvim-config.git: The URL of the external repository. .config/nvim: The path where the submodule should live in your project. 
This command creates an entry in your .gitmodules file, which keeps track of all submodules.\nInitialize and Clone the Submodule\nAfter adding the submodule, you need to initialize and fetch its contents:\ngit submodule update --init --recursive Commit the Changes\nAdd the changes to your repository and commit them:\ngit add .gitmodules .config/nvim git commit -m \u0026#34;Added Neovim configuration as a submodule\u0026#34; How to Update a Git Submodule Submodules do not automatically track updates from the remote repository. If the submodule repository has new commits, follow these steps to update it:\nNavigate to the Submodule Directory\ncd .config/nvim Fetch and Checkout the Latest Changes\nPull the latest changes from the remote repository:\ngit fetch git checkout main git pull origin main Go Back to the Main Repository and Record the Update\nReturn to your main repository:\ncd ../../ git add .config/nvim git commit -m \u0026#34;Updated Neovim configuration submodule\u0026#34; Push the Changes\nPush the changes to your remote repository:\ngit push origin main Common Commands for Submodule Management Clone a Repository with Submodules:\nUse the --recurse-submodules flag when cloning:\ngit clone --recurse-submodules https://github.com/yourusername/dotfiles.git Update All Submodules:\nUpdate all submodules in one command:\ngit submodule update --remote --merge Remove a Submodule:\nTo remove a submodule, follow these steps:\ngit submodule deinit -f path/to/submodule rm -rf .git/modules/path/to/submodule git rm -f path/to/submodule Troubleshooting Tips Submodule Not Initialized?\nIf you encounter an empty submodule directory, run:\ngit submodule update --init --recursive Need to Reset a Submodule?\nTo reset a submodule to the state tracked by the main repository:\ngit submodule update --recursive --force Why Use Submodules? Modularity: Keep related projects separate but linked. Version Control: Pin specific versions of dependencies. 
Reusability: Share configurations (e.g., Neovim) across multiple projects. By following the steps above, you can easily set up and manage Git submodules in your projects. This approach is perfect for organizing modular repositories like dotfiles with external configurations.\n","permalink":"https://kristoffer.dev/blog/git_submodule_guide/","summary":"\u003cp\u003eGit submodules are a powerful feature that allows you to include and manage other repositories as part of your main repository. This is especially useful for projects where you want to use external repositories (e.g., a custom Neovim configuration) while keeping them independent. In this post, we\u0026rsquo;ll cover how to set up, update, and manage Git submodules with a practical example.\u003c/p\u003e\n\u003chr\u003e\n\u003ch3 id=\"what-are-git-submodules\"\u003eWhat Are Git Submodules?\u003c/h3\u003e\n\u003cp\u003eA submodule in Git is a pointer to a specific commit of another repository. It allows you to include that repository as a part of your main project without merging its content directly into your repository.\u003c/p\u003e","title":"Managing Git Submodules: A Quick Guide"},{"content":"Neovim is a modern take on Vim, designed for power users and developers. In this guide, we’ll install Neovim and set it up for basic usage.\nWhy Use Neovim? Extensibility: Lua-based configuration. Modern Features: Built-in LSP, tree-sitter support, etc. Installation Linux Use your package manager:\nsudo apt update \u0026amp;\u0026amp; sudo apt install neovim macOS Install using Homebrew:\nbrew install neovim Windows Install using choco or scoop:\nchoco install neovim Verify the Installation Run nvim in your terminal and create a basic configuration:\n-- File: ~/.config/nvim/init.lua print(\u0026#34;Welcome to Neovim!\u0026#34;) What’s Next? Ready to supercharge Neovim with plugins and custom configurations? 
Check out the next post: Configuring Neovim with My Dotfiles\n","permalink":"https://kristoffer.dev/blog/getting-started-with-neovim-installation-guide/","summary":"\u003cp\u003eNeovim is a modern take on Vim, designed for power users and developers. In this guide, we’ll install Neovim and set it up for basic usage.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"why-use-neovim\"\u003eWhy Use Neovim?\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eExtensibility\u003c/strong\u003e: Lua-based configuration.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eModern Features\u003c/strong\u003e: Built-in LSP, tree-sitter support, etc.\u003c/li\u003e\n\u003c/ul\u003e\n\u003chr\u003e\n\u003ch2 id=\"installation\"\u003eInstallation\u003c/h2\u003e\n\u003ch3 id=\"linux\"\u003eLinux\u003c/h3\u003e\n\u003cp\u003eUse your package manager:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eapt update \u003cspan class=\"o\"\u003e\u0026amp;\u0026amp;\u003c/span\u003e sudo apt install neovim\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003ch3 id=\"macos\"\u003emacOS\u003c/h3\u003e\n\u003cp\u003eInstall using Homebrew:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003ebrew install neovim\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003ch3 id=\"windows\"\u003eWindows\u003c/h3\u003e\n\u003cp\u003eInstall using \u003ccode\u003echoco\u003c/code\u003e or \u003ccode\u003escoop\u003c/code\u003e:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-powershell\" data-lang=\"powershell\"\u003e\u003cspan 
class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"n\"\u003echoco\u003c/span\u003e \u003cspan class=\"n\"\u003einstall\u003c/span\u003e \u003cspan class=\"n\"\u003eneovim\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003chr\u003e\n\u003ch2 id=\"verify-the-installation\"\u003eVerify the Installation\u003c/h2\u003e\n\u003cp\u003eRun \u003ccode\u003envim\u003c/code\u003e in your terminal and create a basic configuration:\u003c/p\u003e","title":"Getting Started with Neovim: Installation Guide"},{"content":"Link Sparks: Essential Knowledge for Programmers\nA treasure trove of resources that every programmer should know. This curated list covers a wide range of essential topics, including algorithms, memory management, security, and best practices for writing clean and efficient code. Whether you\u0026rsquo;re just starting or looking to refine your skills, this guide has something valuable for everyone.\nExplore the list on GitHub\n","permalink":"https://kristoffer.dev/blog/link-sparks---essential-knowledge-for-programmers/","summary":"\u003cp\u003e\u003cstrong\u003eLink Sparks: Essential Knowledge for Programmers\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eA treasure trove of resources that every programmer should know. This curated list covers a wide range of essential topics, including algorithms, memory management, security, and best practices for writing clean and efficient code. 
Whether you\u0026rsquo;re just starting or looking to refine your skills, this guide has something valuable for everyone.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/mtdvio/every-programmer-should-know\"\u003eExplore the list on GitHub\u003c/a\u003e\u003c/p\u003e","title":"Link Sparks: Essential Knowledge for Programmers"},{"content":"Getting Started with Python Environments: Conda, venv, and Installing Dependencies Managing Python projects efficiently requires a solid understanding of virtual environments and dependency management. This guide introduces you to two popular tools—conda and venv—and shows you how to set up a project and install dependencies from a requirements.txt file.\nWhy Use Virtual Environments? Virtual environments allow you to:\nIsolate dependencies for each project. Avoid conflicts between package versions. Maintain a clean global Python installation. Both conda and Python\u0026rsquo;s built-in venv module are excellent tools for creating virtual environments.\nGetting Started with Conda What is Conda? Conda is a powerful package, dependency, and environment manager for Python and other languages. 
It’s available through Anaconda (a comprehensive suite) or Miniconda (lightweight).\nBasic Conda Commands Check if Conda is Installed\nconda --version Update Conda\nconda update conda Search for a Package\nconda search \u0026lt;package-name\u0026gt; Install a Package\nconda install \u0026lt;package-name\u0026gt; Remove a Package\nconda remove \u0026lt;package-name\u0026gt; Managing Conda Environments Create a New Environment\nconda create --name \u0026lt;env-name\u0026gt; python=\u0026lt;version\u0026gt; Example:\nconda create --name myenv python=3.9 Activate the Environment\nconda activate \u0026lt;env-name\u0026gt; Deactivate the Environment\nconda deactivate List All Environments\nconda env list Remove an Environment\nconda env remove --name \u0026lt;env-name\u0026gt; Export an Environment\nconda env export \u0026gt; environment.yml Recreate an Environment\nconda env create -f environment.yml Getting Started with venv The venv module is lightweight and comes pre-installed with Python (since Python 3.3).\nBasic venv Commands Create a Virtual Environment\npython -m venv \u0026lt;env-name\u0026gt; Activate the Virtual Environment\nLinux/Mac: source \u0026lt;env-name\u0026gt;/bin/activate Windows: \u0026lt;env-name\u0026gt;\\Scripts\\activate Deactivate the Virtual Environment\ndeactivate Delete a Virtual Environment Simply remove the directory:\nrm -rf \u0026lt;env-name\u0026gt; # Linux/Mac rmdir /S \u0026lt;env-name\u0026gt; # Windows Installing Packages from requirements.txt When you clone a new repository, it often includes a requirements.txt file listing the dependencies. 
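As a quick sanity check, the venv steps above can be chained together end to end (a sketch; the environment name demo-env is just an example):

```shell
# Create an environment, activate it, and confirm the interpreter
# now runs from inside it; then deactivate.
python3 -m venv demo-env
. demo-env/bin/activate
python -c 'import sys; print(sys.prefix)'   # prints a path ending in demo-env
deactivate
```

On Windows the activation line becomes demo-env\Scripts\activate; everything else is the same.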
Here’s how to install them.\nUsing venv Navigate to Your Repo Directory\ncd \u0026lt;path-to-your-repo\u0026gt; Create and Activate a Virtual Environment\npython -m venv venv source venv/bin/activate # Linux/Mac venv\\Scripts\\activate # Windows Install Dependencies\npip install -r requirements.txt Verify Installation\npip list Using Conda Navigate to Your Repo Directory\ncd \u0026lt;path-to-your-repo\u0026gt; Create and Activate a Conda Environment\nconda create --name \u0026lt;env-name\u0026gt; python=\u0026lt;version\u0026gt; conda activate \u0026lt;env-name\u0026gt; Install Dependencies\nUsing Conda: conda install --file requirements.txt Or using Pip (if requirements.txt lists PyPI packages): pip install -r requirements.txt Verify Installation\npip list Conda vs. venv: Which Should You Use?\nFeature | Conda | venv\nSupported languages | Python, R, Ruby, JavaScript, etc. | Python only\nPackage management | Yes | No\nEnvironment isolation | Yes | Yes\nCross-platform | Yes | Yes\nRecommendations Use Conda if you need:\nMulti-language support. Built-in package management. Precompiled scientific libraries. Use venv if you:\nOnly need Python. Want a lightweight, simple solution. Conclusion Understanding and using virtual environments effectively can significantly improve your Python development workflow. Whether you choose Conda for its powerful features or venv for its simplicity, these tools will help you manage your projects and dependencies like a pro.\nHappy coding! 🚀\n","permalink":"https://kristoffer.dev/blog/getting-started-with-python-environments/","summary":"\u003ch1 id=\"getting-started-with-python-environments-conda-venv-and-installing-dependencies\"\u003eGetting Started with Python Environments: Conda, venv, and Installing Dependencies\u003c/h1\u003e\n\u003cp\u003eManaging Python projects efficiently requires a solid understanding of virtual environments and dependency management. 
This guide introduces you to two popular tools—\u003ccode\u003econda\u003c/code\u003e and \u003ccode\u003evenv\u003c/code\u003e—and shows you how to set up a project and install dependencies from a \u003ccode\u003erequirements.txt\u003c/code\u003e file.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"why-use-virtual-environments\"\u003e\u003cstrong\u003eWhy Use Virtual Environments?\u003c/strong\u003e\u003c/h2\u003e\n\u003cp\u003eVirtual environments allow you to:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eIsolate dependencies for each project.\u003c/li\u003e\n\u003cli\u003eAvoid conflicts between package versions.\u003c/li\u003e\n\u003cli\u003eMaintain a clean global Python installation.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eBoth \u003ccode\u003econda\u003c/code\u003e and Python\u0026rsquo;s built-in \u003ccode\u003evenv\u003c/code\u003e module are excellent tools for creating virtual environments.\u003c/p\u003e","title":"Getting Started with Python Environments"},{"content":"Running out of disk space on your Ubuntu server can cause performance issues and prevent applications from running properly. This guide provides straightforward steps to help you check disk space usage and clean up your server effectively.\nCheck Current Disk Space Usage To see how much disk space is currently being used, open your terminal and run:\ndf -h df displays disk space usage. -h makes the output human-readable. This command shows disk usage for all mounted filesystems. Look at the Use% column to identify filesystems with high usage.\nIdentify Large Directories To find directories that are using a lot of space, use:\ndu -ah / | sort -rh | head -n 20 du -ah /: Displays the disk usage of all files and directories starting from the root, in human-readable format. sort -rh: Sorts the output by size, largest first. head -n 20: Shows the top 20 largest items. 
This will list the largest directories and files on your server.\nInspect Specific Directories If a particular directory is using a lot of space, inspect it further with:\ndu -ah /path/to/directory | sort -rh | head -n 20 Replace /path/to/directory with the actual path to the directory you want to check. This command lists the largest files and directories within the specified directory.\nRemove Unnecessary Packages and Clean Cache To free up space by removing unnecessary packages and cleaning the package cache, run:\nsudo apt-get autoremove sudo apt-get clean autoremove removes packages that are no longer needed. clean clears out the local repository of retrieved package files. Delete Old Log Files Log files can accumulate over time. To delete log files older than 30 days, use:\nsudo find /var/log -type f -name \u0026#34;*.log\u0026#34; -mtime +30 -exec rm -f {} \\; /var/log: Directory where log files are typically stored. -mtime +30: Finds files modified more than 30 days ago. -exec rm -f {}: Deletes the files found. Empty the Trash If files are in the Trash, they still occupy disk space. To empty the Trash for the current user, run:\nrm -rf ~/.local/share/Trash/* This command deletes all files in the Trash directory for the current user.\nRemove Large Files Manually For any large files you no longer need, delete them manually with:\nrm /path/to/large-file Replace /path/to/large-file with the path to the file you want to delete.\nCompress Files to Save Space To save space, compress files or directories you don’t need frequently:\ntar -czvf archive-name.tar.gz /path/to/directory tar -czvf: Compresses files into a .tar.gz archive. archive-name.tar.gz: Name of the output archive. /path/to/directory: Directory you want to compress. Automate Regular Cleanup To automate regular disk cleanup tasks, set up a cron job. 
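The log-deletion step lends itself to a small script; here is a minimal sketch (the function name, scratch directory, and cutoff are illustrative, and the apt steps are left out because they require root):

```shell
# prune_logs DIR DAYS: delete *.log files in DIR older than DAYS days.
prune_logs() {
    find "$1" -type f -name '*.log' -mtime "+$2" -exec rm -f {} \;
}

# Demonstrate on a scratch directory rather than /var/log.
tmp=$(mktemp -d)
touch "$tmp/new.log" "$tmp/old.log"
# Backdate old.log past the 30-day cutoff (GNU touch; -t fallback otherwise).
touch -d '40 days ago' "$tmp/old.log" 2>/dev/null || touch -t 202001010000 "$tmp/old.log"
prune_logs "$tmp" 30
ls "$tmp"   # only new.log is left
```

Pointing prune_logs at /var/log (as root) reproduces the one-liner above.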
Edit your crontab with:\ncrontab -e Add a line like the following to run a cleanup script every Sunday at 3 AM:\n0 3 * * 0 /path/to/your/cleanup-script.sh Replace /path/to/your/cleanup-script.sh with the path to your script.\nConclusion By following these steps, you can regularly monitor and clean up disk space on your Ubuntu servers, ensuring they run smoothly and efficiently. Keep this guide handy for quick reference whenever you need to free up space.\nSave this guide as a reference to maintain your server’s health and prevent disk space issues from impacting performance.\n","permalink":"https://kristoffer.dev/blog/how-to-check-and-clean-up-disk-space/","summary":"\u003cp\u003eRunning out of disk space on your Ubuntu server can cause performance issues and prevent applications from running properly. This guide provides straightforward steps to help you check disk space usage and clean up your server effectively.\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003eCheck Current Disk Space Usage\u003c/strong\u003e\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eTo see how much disk space is currently being used, open your terminal and run:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003edf -h\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cul\u003e\n\u003cli\u003edf displays disk space usage.\u003c/li\u003e\n\u003cli\u003e-h makes the output human-readable.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThis command shows disk usage for all mounted filesystems. 
Look at the Use% column to identify filesystems with high usage.\u003c/p\u003e","title":"How to Check and Clean Up Disk Space on Linux Servers (Ubuntu)"},{"content":"If you\u0026rsquo;ve recently upgraded the disk size of a volume on Hetzner Cloud, you\u0026rsquo;ll find that the additional space is not automatically available for use. This is because the filesystem on the disk needs to be resized to utilize the new space. Here’s a straightforward guide on how to resize your filesystem on a Hetzner Cloud virtual machine (VM).\nStep 1: Verify the New Disk Size Before you start resizing the filesystem, it’s crucial to ensure that the operating system recognizes the new size of your disk. You can use the lsblk command to list all block devices and their sizes:\nlsblk Look for the disk that corresponds to your volume (usually /dev/sda or similar) and check that the size reflects the recent changes you made.\nStep 2: Resize the File System With the disk size confirmed, the next step is to resize the filesystem. This is done using the resize2fs command. It’s important to perform this operation when the disk is not in active use. Here’s how you can do it:\nsudo resize2fs /dev/[your-device] Replace [your-device] with the actual device name, such as /dev/sda1. This command effectively resizes the filesystem to occupy all available space on the disk.\nStep 3: Confirm the Operation To ensure that the filesystem has been successfully resized, you can use the df -h command to check the disk usage and space availability:\ndf -h /mnt/HC_Volume_1234 This will show you the total, used, and available space of your volume. It should reflect the new size.\nImportant Reminder Always ensure that any critical data is backed up before performing operations like resizing a filesystem. 
This precaution helps avoid data loss in case of any unexpected issues during the resizing process.\n","permalink":"https://kristoffer.dev/blog/resize-filesystem-hetzner/","summary":"\u003cp\u003eIf you\u0026rsquo;ve recently upgraded the disk size of a volume on Hetzner Cloud, you\u0026rsquo;ll find that the additional space is not automatically available for use. This is because the filesystem on the disk needs to be resized to utilize the new space. Here’s a straightforward guide on how to resize your filesystem on a Hetzner Cloud virtual machine (VM).\u003c/p\u003e","title":"How to Resize a Filesystem on Hetzner Cloud VM"},{"content":"Symlinks (symbolic links) are like the ultimate shortcuts in the world of operating systems. They help you point to files and folders from multiple locations without creating duplicates. Let\u0026rsquo;s dive into how you can create and manage symlinks, making your life easier whether you\u0026rsquo;re navigating the Linux terminal or the Windows Command Prompt.\nLinux: Creating and Managing Symlinks Making a Symlink In Linux, creating a symlink is as simple as striking a few keys:\nln -s /path/to/target /path/to/symlink Example: Want a shortcut to your project\u0026rsquo;s log file on your desktop?\nln -s /var/log/my_project.log ~/Desktop/project_log Identifying a Symlink Curious about where a symlink leads? 
ls -l spills the beans:\nls -l Saying Goodbye Remove a symlink without touching the original file:\nrm /path/to/symlink Windows: Symlinks in Action Windows might not have the same street cred as Linux in the command line world, but it\u0026rsquo;s got its own symlink game.\nCrafting a Symlink In Windows, the Command Prompt is your go-to for symlink creation:\nmklink Link Target File Example: Create a symlink to a file:\nmklink C:\\Users\\YourName\\Desktop\\project_log.txt C:\\path\\to\\original\\file.txt Directory Example: Or to a directory:\nmklink /D C:\\Users\\YourName\\LinkToFolder C:\\path\\to\\original\\folder Unveiling the Link\u0026rsquo;s Secret Finding out where a symlink points to in Windows is more about checking the symlink\u0026rsquo;s properties in File Explorer than a specific command.\nUnlinking Deleting a symlink in Windows doesn\u0026rsquo;t hurt the original, just like in Linux:\ndel C:\\path\\to\\symlink Wrapping Up Symlinks are a fantastic tool to streamline your workflow, whether you\u0026rsquo;re a Linux enthusiast or a Windows aficionado. They help keep your files organized without the mess of duplicates. Next time you find yourself reaching for the copy-paste, consider if a symlink might be the smarter move.\n","permalink":"https://kristoffer.dev/blog/symbolic-link/","summary":"\u003cp\u003eSymlinks (symbolic links) are like the ultimate shortcuts in the world of operating systems. They help you point to files and folders from multiple locations without creating duplicates. 
Let\u0026rsquo;s dive into how you can create and manage symlinks, making your life easier whether you\u0026rsquo;re navigating the Linux terminal or the Windows Command Prompt.\u003c/p\u003e\n\u003ch2 id=\"linux-creating-and-managing-symlinks\"\u003eLinux: Creating and Managing Symlinks\u003c/h2\u003e\n\u003ch3 id=\"making-a-symlink\"\u003eMaking a Symlink\u003c/h3\u003e\n\u003cp\u003eIn Linux, creating a symlink is as simple as striking a few keys:\u003c/p\u003e","title":"Link Up Your Life: Symlinks in Linux and Windows"},{"content":"This is a set of commands for installing Zsh on Ubuntu 18, 20 and 22.\nUpdating the system sudo apt update \u0026amp;\u0026amp; sudo apt dist-upgrade -y Installing Zsh sudo apt install zsh zsh --version Installing Oh-My-Zsh Plugin Oh-My-Zsh provides some amazing shell enhancements to Zsh.\nYou can install it by typing this command in your terminal: sudo apt install git-core curl fonts-powerline\nsh -c \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\u0026#34; Link https://ohmyz.sh/#install\nPlugins Open the zsh config file like this:\nnano ~/.zshrc Default theme I use the dieter theme at the moment. 
See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes for more themes.\nZSH_THEME=\u0026#34;dieter\u0026#34; Reload Oh-my-zsh Use the command below to reload zsh and activate the changes.\nsource ~/.zshrc ","permalink":"https://kristoffer.dev/blog/oh-my-szh/","summary":"\u003cp\u003eThis is a set of commands for installing Zsh on Ubuntu 18, 20 and 22.\u003c/p\u003e\n\u003ch2 id=\"updating-the-system\"\u003eUpdating the system\u003c/h2\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003esudo apt update \u003cspan class=\"o\"\u003e\u0026amp;\u0026amp;\u003c/span\u003e sudo apt dist-upgrade -y\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\n\u003ch2 id=\"installing-zsh\"\u003eInstalling Zsh\u003c/h2\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003esudo apt install zsh\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003ezsh --version\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\n\u003ch2 id=\"installing-oh-my-zsh-plugin\"\u003eInstalling Oh-My-Zsh Plugin\u003c/h2\u003e\n\u003cp\u003eOh-My-Zsh provides some amazing shell enhancements to Zsh.\u003c/p\u003e\n\u003cp\u003eYou can install it by typing this command in your terminal:\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003esudo apt install 
git-core curl fonts-powerline\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003esh -c \u003cspan class=\"s2\"\u003e\u0026#34;\u003c/span\u003e\u003cspan class=\"k\"\u003e$(\u003c/span\u003ecurl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh\u003cspan class=\"k\"\u003e)\u003c/span\u003e\u003cspan class=\"s2\"\u003e\u0026#34;\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\n\u003cp\u003eLink \u003ca href=\"https://ohmyz.sh/#install\"\u003ehttps://ohmyz.sh/#install\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"plugins\"\u003ePlugins\u003c/h2\u003e\n\u003cp\u003eOpen the zsh config file like this:\u003c/p\u003e","title":"Installing Oh-my-Zsh on Ubuntu"},{"content":"Setting Up SSH Keys on a New Installation New installations often require setting up SSH keys. 
Here’s a quick guide on how I configured my new computer with existing SSH keys that were already generated and installed on a remote server.\nStep 1: Copy the Private Key First, navigate to the SSH directory:\ncd ~/.ssh/ Step 2: Set the Correct Access Privileges It’s important to set the correct read and write permissions for your private key:\nchmod 600 ~/.ssh/id_rsa Step 3: Add the SSH Key to the SSH Agent After placing the private key in the correct folder, add it to the SSH agent using the built-in ssh-add function:\nssh-add ~/.ssh/id_rsa Step 4: Connect to the Remote Server Now you’re ready to connect:\nssh root@129.0.0.1 Have fun!\nThanks for reading!\nFollow me on Twitter for more updates.\nSources UpCloud Tutorial: Use SSH Keys for Authentication ","permalink":"https://kristoffer.dev/blog/setup-ssh-keys/","summary":"\u003ch1 id=\"setting-up-ssh-keys-on-a-new-installation\"\u003eSetting Up SSH Keys on a New Installation\u003c/h1\u003e\n\u003cp\u003eNew installations often require setting up SSH keys. 
Here’s a quick guide on how I configured my new computer with existing SSH keys that were already generated and installed on a remote server.\u003c/p\u003e\n\u003ch2 id=\"step-1-copy-the-private-key\"\u003eStep 1: Copy the Private Key\u003c/h2\u003e\n\u003cp\u003eFirst, navigate to the SSH directory:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"nb\"\u003ecd\u003c/span\u003e ~/.ssh/\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003ch2 id=\"step-2-set-the-correct-access-privileges\"\u003eStep 2: Set the Correct Access Privileges\u003c/h2\u003e\n\u003cp\u003eIt’s important to set the correct read and write permissions for your private key:\u003c/p\u003e","title":"Add Private SSH Key(s)"},{"content":"Hey Here\u0026rsquo;s just a simple cheatsheet for the most basic git commands.\ngit init View gist on GitHub: git-init.sh git add file/files View gist on GitHub: git-add.sh git status View gist on GitHub: git-status.sh git commit View gist on GitHub: git-commit.sh git push View gist on GitHub: git-push.sh git pull View gist on GitHub: git-pull.sh git clone View gist on GitHub: git-clone.sh git branch View gist on GitHub: git-branch.sh https://github.github.com/training-kit/downloads/github-git-cheat-sheet.pdf\n","permalink":"https://kristoffer.dev/blog/basic-git-commands/","summary":"\u003ch2 id=\"hey\"\u003eHey\u003c/h2\u003e\n\u003cp\u003eHere\u0026rsquo;s just a simple cheatsheet for the most basic git commands.\u003c/p\u003e\n\u003ch3 id=\"git-init\"\u003egit init\u003c/h3\u003e\n\u003cdiv class=\"gist-embed\"\u003e\n    \u003cscript src=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e.js?file=git-init.sh\"\u003e\u003c/script\u003e\n    \u003cnoscript\u003e\n        \u003ca 
href=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e\" target=\"_blank\" rel=\"noopener noreferrer\"\u003e\n            View gist on GitHub: git-init.sh\n        \u003c/a\u003e\n    \u003c/noscript\u003e\n\u003c/div\u003e\n\n\u003ch3 id=\"git-add-filefiles\"\u003egit add file/files\u003c/h3\u003e\n\u003cdiv class=\"gist-embed\"\u003e\n    \u003cscript src=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e.js?file=git-add.sh\"\u003e\u003c/script\u003e\n    \u003cnoscript\u003e\n        \u003ca href=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e\" target=\"_blank\" rel=\"noopener noreferrer\"\u003e\n            View gist on GitHub: git-add.sh\n        \u003c/a\u003e\n    \u003c/noscript\u003e\n\u003c/div\u003e\n\n\u003ch3 id=\"git-status\"\u003egit status\u003c/h3\u003e\n\u003cdiv class=\"gist-embed\"\u003e\n    \u003cscript src=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e.js?file=git-status.sh\"\u003e\u003c/script\u003e\n    \u003cnoscript\u003e\n        \u003ca href=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e\" target=\"_blank\" rel=\"noopener noreferrer\"\u003e\n            View gist on GitHub: git-status.sh\n        \u003c/a\u003e\n    \u003c/noscript\u003e\n\u003c/div\u003e\n\n\u003ch3 id=\"git-commit\"\u003egit commit\u003c/h3\u003e\n\u003cdiv class=\"gist-embed\"\u003e\n    \u003cscript src=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e.js?file=git-commit.sh\"\u003e\u003c/script\u003e\n    \u003cnoscript\u003e\n        \u003ca href=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e\" target=\"_blank\" rel=\"noopener noreferrer\"\u003e\n            View gist on GitHub: git-commit.sh\n        \u003c/a\u003e\n    \u003c/noscript\u003e\n\u003c/div\u003e\n\n\u003ch3 id=\"git-push\"\u003egit push\u003c/h3\u003e\n\u003cdiv class=\"gist-embed\"\u003e\n    \u003cscript 
src=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e.js?file=git-push.sh\"\u003e\u003c/script\u003e\n    \u003cnoscript\u003e\n        \u003ca href=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e\" target=\"_blank\" rel=\"noopener noreferrer\"\u003e\n            View gist on GitHub: git-push.sh\n        \u003c/a\u003e\n    \u003c/noscript\u003e\n\u003c/div\u003e\n\n\u003ch3 id=\"git-pull\"\u003egit pull\u003c/h3\u003e\n\u003cdiv class=\"gist-embed\"\u003e\n    \u003cscript src=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e.js?file=git-pull.sh\"\u003e\u003c/script\u003e\n    \u003cnoscript\u003e\n        \u003ca href=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e\" target=\"_blank\" rel=\"noopener noreferrer\"\u003e\n            View gist on GitHub: git-pull.sh\n        \u003c/a\u003e\n    \u003c/noscript\u003e\n\u003c/div\u003e\n\n\u003ch3 id=\"git-clone\"\u003egit clone\u003c/h3\u003e\n\u003cdiv class=\"gist-embed\"\u003e\n    \u003cscript src=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e.js?file=git-clone.sh\"\u003e\u003c/script\u003e\n    \u003cnoscript\u003e\n        \u003ca href=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e\" target=\"_blank\" rel=\"noopener noreferrer\"\u003e\n            View gist on GitHub: git-clone.sh\n        \u003c/a\u003e\n    \u003c/noscript\u003e\n\u003c/div\u003e\n\n\u003ch3 id=\"git-branch\"\u003egit branch\u003c/h3\u003e\n\u003cdiv class=\"gist-embed\"\u003e\n    \u003cscript src=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e.js?file=git-branch.sh\"\u003e\u003c/script\u003e\n    \u003cnoscript\u003e\n        \u003ca href=\"https://gist.github.com/KristofferRisa/97e17b38a0cace8d6b26edeb10dc279e\" target=\"_blank\" rel=\"noopener noreferrer\"\u003e\n            View gist on GitHub: git-branch.sh\n        \u003c/a\u003e\n    
\u003c/noscript\u003e\n\u003c/div\u003e\n\n\u003cp\u003e\u003ca href=\"https://github.github.com/training-kit/downloads/github-git-cheat-sheet.pdf\"\u003ehttps://github.github.com/training-kit/downloads/github-git-cheat-sheet.pdf\u003c/a\u003e\u003c/p\u003e","title":"Basic git commands"},{"content":"Here are some common Linux commands:\nls - used to list the contents of a directory cd - used to change the current working directory mv - used to move or rename files and directories mkdir - used to create a new directory rm - used to delete files and directories chmod - used to change the permissions of a file or directory sudo - used to execute a command with superuser privileges apt-get - used to install and manage software packages on Linux systems grep - used to search for text patterns in files man - used to display the manual pages for a command These are just a few examples - there are many more Linux commands that can be used for various tasks.\nMore commands The cp command is used to copy files and directories in Linux. It has the following syntax:\ncp [OPTION]... SOURCE DEST Where SOURCE is the file or directory you want to copy, and DEST is the destination where you want the copy to be placed.\nFor example, to copy a file named file1.txt to a directory named /tmp, you would use the following command:\ncp file1.txt /tmp The cp command also has many options that can be used to modify its behavior. For example, the -r option can be used to copy directories recursively, and the -p option can be used to preserve the original file\u0026rsquo;s permissions, ownership, and timestamps.\nrsync rsync [OPTION]... SRC DEST rsync -avz /src /dest The rsync command also has many options that can be used to modify its behavior. 
For example, the -a option can be used to preserve the original file\u0026rsquo;s permissions, ownership, and timestamps, and the -z option can be used to compress the data during transfer.\nFilter programs based on listening port lsof -i tcp:8000 Find the Process ID (PID) of a running program ps aux | grep firefox Search in files in a folder grep -rni \u0026#34;word\u0026#34; * Sources http://www.cyberciti.biz/faq/copy-command/ https://www.techonthenet.com/linux/commands/rsync.php http://www.cyberciti.biz/faq/linux-delete-folders/ ","permalink":"https://kristoffer.dev/blog/linux-commands/","summary":"\u003cp\u003eHere are some common Linux commands:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003els\u003c/strong\u003e - used to list the contents of a directory\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003ecd\u003c/strong\u003e - used to change the current working directory\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003emv\u003c/strong\u003e - used to move or rename files and directories\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003emkdir\u003c/strong\u003e - used to create a new directory\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003erm\u003c/strong\u003e - used to delete files and directories\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003echmod\u003c/strong\u003e - used to change the permissions of a file or directory\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003esudo\u003c/strong\u003e - used to execute a command with superuser privileges\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eapt-get\u003c/strong\u003e - used to install and manage software packages on Linux systems\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003egrep\u003c/strong\u003e - used to search for text patterns in files\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eman\u003c/strong\u003e - used to display the manual pages for a command\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eThese are just a few examples - there are many more Linux commands that can 
be used for various tasks.\u003c/p\u003e","title":"Linux Commands"}]