Linux fails to mount an NTFS external drive: mount: wrong fs type, bad option, bad superblock
Check the latest kernel messages with dmesg | tail:
root@debian:/home/linxin# dmesg | tail
[799922.637697] ntfs3(sda1): It is recommened to use chkdsk.
[799922.643453] ntfs3(sda1): volume is dirty and "force" flag is not set!
[799944.456400] ntfs3(sda1): It is recommened to use chkdsk.
[799944.462180] ntfs3(sda1): volume is dirty and "force" flag is not set!
[799962.495381] ntfs3(sda1): It is recommened to use chkdsk.
[799962.501035] ntfs3(sda1): volume is dirty and "force" flag is not set!
[799979.101860] ntfs3(sda1): It is recommened to use chkdsk.
[799979.106927] ntfs3(sda1): volume is dirty and "force" flag is not set!
[800154.305179] ntfs3(sda1): It is recommened to use chkdsk.
[800154.310856] ntfs3(sda1): volume is dirty and "force" flag is not set!
The repeated "volume is dirty" in the log is the core issue. Newer Linux kernels mount NTFS partitions with the ntfs3 driver by default, and this driver is strict about data safety. If the drive was previously unplugged from Windows without being safely ejected, or Windows "Fast Startup" left the filesystem in a hibernated state, NTFS keeps a "dirty flag" on the volume. To avoid data corruption, Linux refuses to mount it and tells you to run chkdsk under Windows.
If you don't have a Windows machine handy, or don't want to shuttle the drive back and forth, you can clear the dirty flag directly on Linux with the ntfsfix tool.
Confirm the name of the affected partition (mine is /dev/sda1), then run the repair command:
root@debian:/home/linxin# ntfsfix -d /dev/sda1
Mounting volume... OK
Processing of $MFT and $MFTMirr completed successfully.
Checking the alternate boot sector... OK
NTFS volume version is 3.1.
NTFS partition /dev/sda1 was processed successfully.
Seeing "processed successfully" means the dirty flag has been cleared and the repair succeeded.
Frequent FRP 502 errors on a 512 MB Alibaba Cloud ECS caused by insufficient memory
At first I assumed the 502s came from network issues on the relay between the origin server and Alibaba Cloud, but then the logs showed frequent OOM kills:

It turned out the Alibaba Cloud Security Center (Cloud Shield) agent was using too much memory, causing FRP to be killed repeatedly.
Uninstalling the agent following this guide fixed it: https://www.alibabacloud.com/help/zh/security-center/user-guide/uninstall-the-security-center-agent
I didn't initially suspect memory because I had already added 2 GB of swap, but it was memory pressure after all (the process had been getting killed daily like this for almost half a year).
WordPress Plugin | LLM API Spam Comment Filter
Link: https://github.com/WiDayn/llm-spam-filter?tab=readme-ov-file
It simply calls an API: given the post title plus the comment text, the LLM judges whether the comment is spam.
You don't need a particularly large model; the free models on SiliconFlow are more than enough, so this is essentially zero-cost.
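As a rough sketch of how such a filter works (the model name, prompt wording, and the helper `build_spam_check_request` below are illustrative assumptions of mine, not taken from the plugin), the request body for an OpenAI-compatible chat completions endpoint could be built like this and then POSTed to the provider with an API key:

```python
import json

def build_spam_check_request(post_title, comment, model="Qwen/Qwen2.5-7B-Instruct"):
    """Build the JSON body for an OpenAI-compatible /chat/completions call
    that asks the model to classify one comment as SPAM or HAM."""
    prompt = (
        f"Post title: {post_title}\n"
        f"Comment: {comment}\n"
        "Is this comment spam? Answer with exactly SPAM or HAM."
    )
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 4,       # one-word answer is enough
        "temperature": 0,      # deterministic classification
    })
```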
Screenshots:


Multi-GPU Stress Test Script (Matrix Multiplication, Torch-based)
import torch
import torch.multiprocessing as mp
import time
import math

def stress_task(gpu_id):
    try:
        device = torch.device(f"cuda:{gpu_id}")
        torch.cuda.set_device(device)  # make sure this process uses the right CUDA context
        free_mem, total_mem = torch.cuda.mem_get_info(device)
        # Dynamically size the matrices.
        # Target 65% of the free VRAM (leave 35% for PyTorch/kernel overhead to avoid OOM).
        # We store X, Y and the result Z: 3 matrices, 4 bytes per float32 element.
        # Formula: (N * N * 4 bytes) * 3 matrices <= free_mem * 0.65
        target_mem = free_mem * 0.65
        matrix_size = int(math.sqrt(target_mem / 12))
        free_gb = free_mem / (1024**3)
        total_gb = total_mem / (1024**3)
        print(f"[GPU {gpu_id}] Total: {total_gb:.2f}GB | Free: {free_gb:.2f}GB")
        print(f"[GPU {gpu_id}] Calculated Matrix Size: {matrix_size}x{matrix_size} (Target utilization: ~65%)")
        print(f"[GPU {gpu_id}] Allocating memory...")
        x = torch.randn(matrix_size, matrix_size, device=device)
        y = torch.randn(matrix_size, matrix_size, device=device)
        print(f"[GPU {gpu_id}] Starting loop...")
        while True:
            z = torch.mm(x, y)
    except RuntimeError as e:
        print(f"[GPU {gpu_id}] Error: {e}")
        if "out of memory" in str(e):
            print(f"[GPU {gpu_id}] Auto-size was too aggressive. Try lowering the 0.65 factor in code.")
    except KeyboardInterrupt:
        pass

if __name__ == '__main__':
    if not torch.cuda.is_available():
        print("CUDA is not available!")
        exit()
    num_gpus = torch.cuda.device_count()
    print(f"Found {num_gpus} GPUs. Auto-calculating load for each...")
    processes = []
    mp.set_start_method('spawn', force=True)
    print("Starting processes... (Press Ctrl+C to stop)")
    start_time = time.time()
    try:
        for i in range(num_gpus):
            p = mp.Process(target=stress_task, args=(i,))
            p.start()
            processes.append(p)
        for p in processes:
            p.join()
    except KeyboardInterrupt:
        print("\nStop signal received. Terminating all processes...")
        for p in processes:
            if p.is_alive():
                p.terminate()
        print(f"All stopped. Duration: {time.time() - start_time:.2f} seconds")
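As a quick sanity check of the sizing formula in the comments above (the 24 GiB figure is a hypothetical amount of free VRAM, not from the original post):

```python
import math

free_mem = 24 * 1024**3              # assume 24 GiB of free VRAM
target_mem = free_mem * 0.65         # use 65% of it
# 3 float32 N x N matrices = 12 bytes per N*N slot, so N = sqrt(target / 12)
n = int(math.sqrt(target_mem / 12))
# The three matrices then occupy 12 * n**2 bytes, just under the 65% target.
assert 12 * n**2 <= target_mem
print(n)
```

For this example the script would pick roughly a 37000 x 37000 matrix per GPU.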
Configuring Claude Code with a Chinese platform (e.g. MiniMax, GLM)
First install Node.js: https://nodejs.org/zh-cn/download
Then install pnpm via npm: npm install -g pnpm@latest-10
Install Claude Code with pnpm: pnpm install -g @anthropic-ai/claude-code
If you hit network issues, switch the registry: pnpm config set registry https://registry.npmmirror.com/
Then edit ~/.claude/settings.json; your platform's guide should document the required contents.
Claude Code blocks some countries and regions, so running it directly shows a prompt like this:

, so we need to bypass the check: edit ~/.claude.json and add one key at the root of the JSON (i.e. inside the outer {}): "hasCompletedOnboarding": true, for example:

Then save, exit, and run:
node --eval "
const homeDir = os.homedir();
const filePath = path.join(homeDir, '.claude.json');
if (fs.existsSync(filePath)) {
  const content = JSON.parse(fs.readFileSync(filePath, 'utf-8'));
  fs.writeFileSync(filePath, JSON.stringify({ ...content, hasCompletedOnboarding: true }, null, 2), 'utf-8');
} else {
  fs.writeFileSync(filePath, JSON.stringify({ hasCompletedOnboarding: true }), 'utf-8');
}"
After that, cd into your project root and run claude; it should now work normally:

Installing a package that MiKTeX can't find but CTAN has
For example, the sttools bundle currently can't be installed from the MiKTeX Console.
The fix: download the package's source archive from CTAN, extract it, and compile the .dtx file you need from the command line (stfloats.dtx below is one component of the sttools bundle), e.g.
tex .\stfloats.dtx
Then move the generated .sty file into the same directory as the .tex file you are compiling (e.g. Menuscript.tex).
Installing LaTeX on Windows [MiKTeX + Cursor (VS Code)]
Why MiKTeX instead of TeX Live?
The TeX Live installer is around 6 GB and bundles many packages that a good portion of users will never touch, and its installation time draws plenty of complaints. MiKTeX instead offers a minimal installer of only about 100 MB.
Linux systemd.service Configuration
1. Create the your-name.service file
sudo nano /etc/systemd/system/your-name.service
2. Fill in the service unit
[Unit]
Description=your-name service
After=network.target
[Service]
ExecStart=/your-cmd-exc-path
[Install]
WantedBy=multi-user.target
3. Enable at boot and start the service
sudo systemctl enable your-name.service
sudo systemctl start your-name.service
Segmentation Metrics Toolkit
Cropping the original image + mask with expansion
import nibabel as nib
import numpy as np

def crop_image_and_mask(image_path, mask_path, mask_expansion=0):
    # Load the image and mask
    image_nii = nib.load(image_path)
    mask_nii = nib.load(mask_path)
    # Get the image and mask as numpy arrays
    image_data = image_nii.get_fdata()
    mask_data = mask_nii.get_fdata()
    # Spatial resolution in millimeters (voxel size along x, y, z)
    voxel_size = image_nii.header.get_zooms()
    # Convert mask_expansion from millimeters to voxels
    expansion_pixels = [int(mask_expansion / size) for size in voxel_size]
    # Coordinates of the non-zero mask voxels
    mask_nonzero_coords = np.argwhere(mask_data > 0)
    # Bounding box of the mask
    min_coords = mask_nonzero_coords.min(axis=0)
    max_coords = mask_nonzero_coords.max(axis=0)
    # Expand the bounding box, clamped to the image bounds
    min_coords_expanded = np.maximum(min_coords - expansion_pixels, 0)
    max_coords_expanded = np.minimum(max_coords + expansion_pixels,
                                     np.array(image_data.shape) - 1)
    # Crop the corresponding region from image and mask
    cropped_image_data = image_data[min_coords_expanded[0]:max_coords_expanded[0] + 1,
                                    min_coords_expanded[1]:max_coords_expanded[1] + 1,
                                    min_coords_expanded[2]:max_coords_expanded[2] + 1]
    cropped_mask_data = mask_data[min_coords_expanded[0]:max_coords_expanded[0] + 1,
                                  min_coords_expanded[1]:max_coords_expanded[1] + 1,
                                  min_coords_expanded[2]:max_coords_expanded[2] + 1]
    # Print the expanded crop bounds
    print(f"Expanded crop start coordinates: {min_coords_expanded}")
    print(f"Expanded crop end coordinates: {max_coords_expanded}")
    # Build new NIfTI images (note: the original affine is reused,
    # so the origin is not shifted to the crop offset)
    cropped_image_nii = nib.Nifti1Image(cropped_image_data, image_nii.affine)
    cropped_mask_nii = nib.Nifti1Image(cropped_mask_data, mask_nii.affine)
    # Save the cropped image and mask
    nib.save(cropped_image_nii, image_path.replace('image', f'image_ROI_{mask_expansion}_{mask_expansion}_{mask_expansion}'))
    nib.save(cropped_mask_nii, mask_path.replace('label', f'label_ROI_{mask_expansion}_{mask_expansion}_{mask_expansion}'))
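The post title promises metric computation; as a minimal sketch of that part (the helper `dice_and_iou` is mine, not part of the original code, and assumes binary masks), Dice and IoU can be computed like this:

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice coefficient and IoU for two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    # Convention: two empty masks count as a perfect match
    dice = 2 * inter / total if total > 0 else 1.0
    iou = inter / union if union > 0 else 1.0
    return dice, iou
```

Applied to the cropped arrays above, `dice_and_iou(cropped_mask_data > 0, other_mask > 0)` would compare two ROIs voxel-wise.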
