Posts in category “Linux”

AI Prompt for Git Commit Messages

Analyze the following code changes and generate a concise, meaningful git commit message that:

1. Summarizes the main purpose or impact of the changes
2. Is no longer than 100 characters (150 characters maximum if absolutely necessary)
3. Uses present tense and imperative mood (e.g., "Add feature" not "Added feature")
4. Focuses on the "what" and "why" rather than the "how"

Provide ONLY the commit message itself, without any additional text, explanations, or formatting.

Code changes:

I have used it in my CI bash script and it works very well! ...
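For reference, the CI glue can be as small as a couple of shell functions. A minimal sketch, assuming a hypothetical `ai` command that sends stdin to a model and prints the reply (swap in whatever CLI you actually use):

```shell
#!/bin/bash
# Sketch: build the prompt from the staged diff, call a model, and
# enforce the length rule. "ai" is a hypothetical stand-in CLI.

PROMPT='Analyze the following code changes and generate a concise,
meaningful git commit message. Provide ONLY the commit message itself.'

# Assemble the full prompt with the staged diff appended
build_prompt() {
  printf '%s\n\nCode changes:\n%s\n' "$PROMPT" "$(git diff --cached)"
}

# Enforce the 100-character target / 150-character hard limit
check_msg_length() {
  local msg="$1"
  if [ "${#msg}" -le 100 ]; then
    echo ok
  elif [ "${#msg}" -le 150 ]; then
    echo "ok (over the 100-char target, under the 150 hard limit)"
  else
    echo "too long (${#msg} chars)"
    return 1
  fi
}

# msg=$(build_prompt | ai)               # hypothetical model call
# check_msg_length "$msg" && git commit -m "$msg"
```

The commented-out lines show where the model call would go; everything else is plain bash you can drop into the CI script as-is.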

How to Avoid SSH's "Are you sure you want to continue connecting?" Prompt

If you're tired of seeing the "Are you sure you want to continue connecting (yes/no/[fingerprint])?" prompt every time you SSH into a new server, you're not alone. This security feature, while important, can be a nuisance for system administrators and developers who frequently connect to new machines, especially when running a one-off remote command on a host that isn't yet in known_hosts. Let's explore how to streamline this process without completely compromising security.

Understanding the Prompt

First, it's important to understand why this prompt appears. It's a security measure designed to protect you from man-in-the-middle attacks by verifying the authenticity of the server you're connecting to. However, in controlled environments or for non-critical systems, you might want to bypass this prompt.

The Quick Fix: StrictHostKeyChecking

One simple way to avoid this prompt is by using the StrictHostKeyChecking option. You can add this to your SSH command like this:

ssh -o StrictHostKeyChecking=accept-new user@hostname

But what if you want to make this change permanent? You can add it to your SSH config file:

  1. Open or create your SSH config file:

    vim ~/.ssh/config
    
  2. Add the following line:

    StrictHostKeyChecking accept-new
    

This setting will automatically accept and save new host keys without prompting, while still warning you if a known host's key has changed.
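If a blanket setting feels too broad, ssh_config also lets you scope the option to a Host pattern, so accept-new only applies to hosts you actually trust while everything else keeps the normal prompt. The *.internal.example pattern below is just a placeholder:

```
Host *.internal.example 10.0.*
    StrictHostKeyChecking accept-new

Host *
    StrictHostKeyChecking ask
```

Since ssh uses the first obtained value for each option, the more specific Host block must come before the Host * fallback.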

Security Considerations

While this method is convenient, it's important to understand the security implications:

  • It automatically accepts keys from new, unknown hosts.
  • It still protects you from potential man-in-the-middle attacks on known hosts.
  • It assumes you trust your network and the new hosts you're connecting to.

When to Use This Method

This approach is best suited for:

  • Environments where you frequently connect to new, trusted hosts.

  • Controlled, secure networks.

  • Scenarios where the convenience outweighs the risk of not manually verifying each new host.

Incremental File Backups with rsync

Machine A and machine B are usually already defined in ~/.ssh/config with a trust relationship in place, so we can refer to the host by its alias instead of the user@B_machine form.

The sync_files.sh Script

#!/bin/bash

# Define the source and destination directories
SOURCE_DIR="/data/files/"
DEST_DIR="B_machine:~/backups/A_machine/data/files"

# Run rsync
rsync -av --exclude='tmp/' "$SOURCE_DIR" "$DEST_DIR"

  1. Make sure the script is executable:

    chmod +x sync_files.sh
    
  2. Set up cron to run the script every five minutes:

    crontab -e
    

    Add:

    */5 * * * * /path/to/sync_files.sh
    
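One caveat with a five-minute schedule: if a sync ever takes longer than five minutes, cron starts a second copy on top of the first. A simple guard, assuming util-linux's flock is available (it is on most distros), is to serialize runs through a lock file:

```shell
#!/bin/bash
# Wrap a command in an exclusive lock; a second invocation exits
# immediately instead of racing the first. flock is from util-linux.
run_once() {
  local lockfile="$1"; shift
  flock -n "$lockfile" "$@"   # -n: fail at once if the lock is held
}

# In sync_files.sh, the rsync line would become:
# run_once /tmp/sync_files.lock \
#   rsync -av --exclude='tmp/' "$SOURCE_DIR" "$DEST_DIR" \
#   || echo "previous sync still running, skipping this round"
```

The lock is released automatically when the wrapped command exits, so there is nothing to clean up after a crash.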

/var/log/journal Is Using 2.8GB: Is There a Once-and-for-All Fix?

On Linux, the /var/log/journal directory stores the systemd journal's persistent logs. If it is consuming too much space, you can manage and cap the log size in a few ways:

1. Adjust the journald Configuration

You can cap the journal's size and retention by editing the journald configuration file:

  1. Open a terminal.

  2. Edit the journald configuration file:

    sudo nano /etc/systemd/journald.conf
    
  3. Add or modify the following settings:

    [Journal]
    SystemMaxUse=500M
    SystemKeepFree=100M
    SystemMaxFileSize=50M
    SystemMaxFiles=10
    

    These options mean:

    • SystemMaxUse: caps the total disk space the journal may use.
    • SystemKeepFree: how much disk space journald must leave free for other uses.
    • SystemMaxFileSize: the maximum size of each individual journal file.
    • SystemMaxFiles: the maximum number of journal files to keep.
  4. Save the file and exit the editor.

  5. Restart the journald service to apply the changes:

    sudo systemctl restart systemd-journald
    
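Instead of editing /etc/systemd/journald.conf in place, recent systemd versions also read drop-in files from /etc/systemd/journald.conf.d/, which keeps your changes separate from the distro defaults:

```
# /etc/systemd/journald.conf.d/size.conf
[Journal]
SystemMaxUse=500M
SystemMaxFileSize=50M
```

Restart systemd-journald afterwards, just as with the main file.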

2. Clean Up Existing Logs

To free space immediately, you can vacuum old journal files by hand:

sudo journalctl --vacuum-size=500M

This command shrinks the journal to at most 500MB on disk.

3. Clean Up Logs on a Schedule

You can also use a cron job to trim the logs periodically, for example a daily task that keeps the journal within a reasonable size.

  1. Open the cron editor:

    crontab -e
    
  2. Add the following line to clean the logs daily:

    0 0 * * * /usr/bin/journalctl --vacuum-time=7d
    

    With this in place, the system removes journal entries older than 7 days every day at midnight.

Set Up a 2GB Swap on a Remote VPS with a Simple Script

Running a small VPS with limited memory can be frustrating, especially when processes get killed due to low memory. A quick and easy way to help prevent this is by setting up a swap file.

This script:

  1. Checks whether swap already exists on the remote machine.
  2. If not, creates a 2GB swap file and enables it.
  3. Adds the swap file to /etc/fstab to make it permanent.

The script uses scp to copy a temporary script to the remote machine and ssh to execute it. Here’s the full script:

#!/bin/bash

# Check if machine name is provided
if [ -z "$1" ]; then
  echo "Usage: $0 <machine-name>"
  exit 1
fi

REMOTE_MACHINE=$1
SWAPFILE=/swapfile
SIZE=2048

# Generate remote script content
REMOTE_SCRIPT=$(cat <<EOF
#!/bin/bash
if swapon --show | grep -q "$SWAPFILE"; then
  echo "Swap is already enabled on $SWAPFILE"
  exit 0
fi

sudo dd if=/dev/zero of=$SWAPFILE bs=1M count=$SIZE
sudo chmod 600 $SWAPFILE
sudo mkswap $SWAPFILE
sudo swapon $SWAPFILE

if ! grep -q "$SWAPFILE" /etc/fstab; then
  echo "$SWAPFILE none swap sw 0 0" | sudo tee -a /etc/fstab
fi
free -h
EOF
)

# Save remote script locally
echo "$REMOTE_SCRIPT" > /tmp/create_swap.sh

# Copy script to remote machine and execute it
scp /tmp/create_swap.sh "$REMOTE_MACHINE":/tmp/
ssh "$REMOTE_MACHINE" "bash /tmp/create_swap.sh"

# Cleanup
ssh "$REMOTE_MACHINE" "rm /tmp/create_swap.sh"

How It Works

  • The script checks if the swap file already exists by running swapon --show on the remote machine.
  • If swap is already enabled, it exits.
  • Otherwise, it creates a 2GB swap file (/swapfile), sets the right permissions, and adds it to /etc/fstab so it’s automatically enabled after a reboot.
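One possible tweak: on most filesystems the dd step can be replaced with fallocate, which reserves the 2GB almost instantly instead of writing zeros. (Swap files created this way need care on btrfs and some network filesystems, so dd remains the portable default.) A quick demonstration on a small temp file:

```shell
# Demo of fallocate vs dd: reserve space instantly without writing data.
# Uses a 1M temp file here; the real script would use /swapfile and 2048M.
DEMO=/tmp/demo_swapfile
fallocate -l 1M "$DEMO"   # allocate 1 MiB immediately
stat -c %s "$DEMO"        # prints 1048576 (bytes)
rm -f "$DEMO"
```

In the script above, the dd line would become something like `sudo fallocate -l 2G $SWAPFILE`, with the chmod/mkswap/swapon steps unchanged.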

Usage

  1. Save the script as create_swap.sh and make it executable:

    chmod +x create_swap.sh
    
  2. Run the script with the remote machine name:

    ./create_swap.sh <remote-machine>
    

And that's it! The script takes care of everything for you, ensuring your VPS has a swap file ready to handle memory spikes.