Posts in category “Tutorial”

Immich Image Compression Proxy: Save Storage Space Transparently

Immich always stores original photos/videos, which quickly fills up your disk. This guide shows how to automatically compress images during upload without modifying Immich itself.

This solution is based on the excellent work by JamesCullum. Without his innovative proxy approach, this wouldn't be possible.

How It Works

A proxy container sits between your clients and the Immich server:

  • Intercepts image uploads
  • Resizes images to specified dimensions
  • Forwards compressed images to Immich
  • Completely transparent to clients
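
To make the flow concrete, here is a hypothetical manual upload sent through the proxy with curl. The port and upload field name come from the compose file below; the API key, file, and metadata values are placeholders, and the field list follows Immich's asset-upload API at the time of writing, so check your server's API docs if it has changed:

curl -X POST http://your-server:6743/api/assets \
  -H "x-api-key: YOUR_API_KEY" \
  -F "assetData=@photo.jpg" \
  -F "deviceAssetId=photo-1" \
  -F "deviceId=curl" \
  -F "fileCreatedAt=2024-01-01T00:00:00Z" \
  -F "fileModifiedAt=2024-01-01T00:00:00Z"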

Setup

1. Add Proxy to Docker Compose

Add this service to your docker-compose.yml:

services:
  upload-proxy:
    container_name: upload_proxy
    image: shukebeta/multipart-upload-proxy-with-compression:latest
    environment:
      - IMG_MAX_NARROW_SIDE=1600  # Smart resize: constrains the smaller dimension (recommended)
      - JPEG_QUALITY=85           # JPEG compression quality (1-100, balances size and quality, 85 is good enough for me)
      - FORWARD_DESTINATION=http://immich-server:2283/api/assets
      - FILE_UPLOAD_FIELD=assetData
      - LISTEN_PATH=/api/assets
    ports:
      - "6743:6743"
    restart: always
    depends_on:
      - immich-server

2. Update Nginx Configuration

Critical: Simple routing doesn't work because the proxy only handles uploads, not image retrieval. Use this precise configuration:

# Only match exactly /api/assets (upload endpoint)
location = /api/assets {
    # Method check: only POST goes to upload proxy
    if ($request_method = POST) {
        proxy_pass http://your-server:6743;
        break;  # Critical: prevents fallthrough
    }
    # Non-POST (like GET lists) go to main service
    proxy_pass http://your-server:2283;
}

# /api/assets/xxxxx (with suffix - thumbnails, full images, ID access) all go to main service
location /api/assets/ {
    proxy_pass http://your-server:2283;
}

# Everything else
location / {
    proxy_pass http://your-server:2283;
}

Why this configuration is essential:

  • Proxy only processes multipart/form-data uploads
  • GET requests for images must bypass the proxy
  • location = /api/assets matches uploads exactly
  • location /api/assets/ matches image retrieval URLs
  • break prevents nginx from processing additional location blocks
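
A quick way to sanity-check the split from the command line (a sketch; your-server is the same placeholder as in the config above, and the exact status codes depend on your auth setup - the point is that both verbs get an application response rather than a gateway error):

curl -s -o /dev/null -w '%{http_code}\n' http://your-server/api/assets          # GET: served by Immich
curl -s -o /dev/null -w '%{http_code}\n' -X POST http://your-server/api/assets  # POST: served by the upload proxy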

3. Deploy Changes

# Stop containers
docker compose down

# Start with new configuration
docker compose up -d

# Reload nginx
nginx -t && nginx -s reload

Resize Strategies

Smart Narrow-Side Constraint (Recommended)

The new IMG_MAX_NARROW_SIDE parameter provides more intelligent resizing by constraining only the smaller dimension:

- IMG_MAX_NARROW_SIDE=1600  # Constrains the narrower side to 1600px

Examples:

  • Panorama (4000×1200) → 4000×1200 (no change, narrow side already ≤1600)
  • Portrait (1200×3000) → 1200×3000 (no change, narrow side already ≤1600)
  • Square (2400×2400) → 1600×1600 (both sides constrained)

Legacy Bounding Box Strategy

The original width/height constraints create a bounding box that both dimensions must fit inside. This can over-shrink extreme aspect ratios: a 4000×1200 panorama, for example, would be scaled down to 1080×324 to fit a 1080×1920 box, while the narrow-side strategy would leave it untouched:

- IMG_MAX_WIDTH=1080
- IMG_MAX_HEIGHT=1920

Common Presets

General purpose (recommended):

- IMG_MAX_NARROW_SIDE=1600
- JPEG_QUALITY=85

High quality for professionals:

- IMG_MAX_NARROW_SIDE=2400  
- JPEG_QUALITY=90

Note: IMG_MAX_NARROW_SIDE takes priority over IMG_MAX_WIDTH/IMG_MAX_HEIGHT when set to a positive value.

Verification

  1. Check proxy is running: docker ps | grep upload_proxy
  2. Upload a large image through your Immich app
  3. Check storage folder - image should be smaller than original
  4. Verify image quality meets your standards
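
If step 3 shows no size difference, the proxy logs usually tell you whether uploads are reaching it at all (container name comes from the compose file above):

docker logs upload_proxy --tail 20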

Why This Works

  • Security: All authentication headers pass through untouched
  • Compatibility: Uses standard HTTP - works with any client
  • Transparency: Immich doesn't know compression happened

Troubleshooting

Uploads fail: Check nginx routing and the proxy container logs.

Images not compressed: Check nginx routing - requests may be bypassing the proxy.

Poor quality: Raise JPEG_QUALITY or increase IMG_MAX_NARROW_SIDE (or the legacy IMG_MAX_WIDTH/IMG_MAX_HEIGHT values).

Why This Proxy Approach?

Immich developers have explicitly rejected adding compression features to the core application. This proxy solution is currently the only practical way to reduce storage usage while maintaining full compatibility with all Immich clients.

Click Here to check my working configuration.

.NET Deployment Issue: Ghost Dependency

Symptom

  • .NET 8 app with Polly fails on deploy: Could not load file or assembly 'Microsoft.Extensions.Http, Version=9.0.0.0'

Checks

  1. Dependencies

    • No preview packages in .csproj
    • Locked Microsoft.Extensions.Http to 8.0.0
    • Local build clean → Not a package issue
  2. Environment

    • CI/CD fixed to .NET 8 SDK
    • Cleared ~/.dotnet and ~/.nuget
    • Removed caches → Environment clean
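
For reference, a sketch of how these checks can be run locally (package name taken from the error message above):

# Which version of the package does the project actually resolve?
dotnet list package --include-transitive | grep Microsoft.Extensions.Http

# Clear all local NuGet caches
dotnet nuget locals all --clear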

Root Cause

  • Deploy script only overwrote files, never cleaned target dir
  • Old HappyNotes.Api.deps.json misled runtime to request v9.0.0.0

Lesson: Always wipe the target directory before publishing:

rm -rf /your/target/directory/*
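
A minimal sketch of the fixed deploy step, assuming the app is published straight to the target directory (paths are placeholders):

# Clean first so no stale .deps.json survives, then publish fresh
rm -rf /your/target/directory/*
dotnet publish -c Release -o /your/target/directory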

GitHub CLI Multi-Account Auto-Switcher: Zero-Config Solution

If you juggle work and personal GitHub accounts like I do, constantly checking which account is active before running gh pr create gets old fast. Here's a perfect solution that completely eliminates this friction.

The Problem

Working with multiple GitHub accounts means:

  • Forgetting which account is currently active
  • Getting "No default remote repository" errors
  • Manually running gh auth switch and gh repo set-default
  • Accidentally creating PRs with the wrong account

The Solution

Create a gh wrapper script that automatically detects the current repository's owner, switches to the correct GitHub account, and sets the default repository before executing any command.

Implementation

#!/bin/bash

# Path to original gh binary
ORIGINAL_GH="/usr/local/bin/gh"  # Adjust for your system

# Get current repository's remote URL
remote_url=$(git remote get-url origin 2>/dev/null)

# If not in a git repo, pass through to original gh
if [ $? -ne 0 ]; then
    exec "$ORIGINAL_GH" "$@"
fi

# Extract full repository name (owner/repo) - remove .git suffix
if [[ $remote_url =~ github\.com[:/]([^/]+/[^/]+) ]]; then
    repo_full_name="${BASH_REMATCH[1]}"
    repo_full_name="${repo_full_name%.git}"
    repo_owner=$(echo "$repo_full_name" | cut -d'/' -f1)
else
    exec "$ORIGINAL_GH" "$@"
fi

# Get current active GitHub account
current_user=$("$ORIGINAL_GH" api user --jq '.login' 2>/dev/null)

if [ $? -ne 0 ]; then
    exec "$ORIGINAL_GH" "$@"
fi

# Switch account if needed.
# Note: a bare `gh auth switch` toggles between two stored accounts;
# if you have more than two, pass `--user <login>` to pick one explicitly.
if [ "$repo_owner" != "$current_user" ]; then
    echo "→ Repository belongs to $repo_owner, switching from $current_user..." >&2
    "$ORIGINAL_GH" auth switch >/dev/null 2>&1
fi

# Set default repository to avoid "No default remote repository" errors
"$ORIGINAL_GH" repo set-default "$repo_full_name" >/dev/null 2>&1

# Execute original command
exec "$ORIGINAL_GH" "$@"

Setup

  1. Find your original gh path:
which gh
# Use this path in the ORIGINAL_GH variable
  2. Create the wrapper script:
# Save script as ~/.local/bin/gh
chmod +x ~/.local/bin/gh
  3. Adjust PATH priority:
# Add to ~/.bashrc or ~/.zshrc
export PATH="$HOME/.local/bin:$PATH"
  4. Verify setup:
source ~/.bashrc
which gh  # Should show ~/.local/bin/gh

Usage

All GitHub CLI commands now automatically use the correct account:

# In personal repository
gh pr create  # Uses personal account, sets correct default repo

# In work repository  
gh pr create  # Uses work account, sets correct default repo

# All other commands work transparently
gh issue list
gh pr view
gh repo clone username/repo

Advanced: Working with Forks

For forked repositories where you want to view PRs in the upstream repo:

# View your PRs in the upstream repository
gh pr list --author @me --repo upstream-owner/repo-name

# Or temporarily switch to upstream
gh repo set-default upstream-owner/repo-name
gh pr view [PR_NUMBER]

How It Works

  • Extracts repository owner from the current directory's origin remote URL
  • Compares repository owner with currently active GitHub account
  • Automatically runs gh auth switch if accounts don't match
  • Always sets the correct default repository to prevent CLI errors
  • Transparently proxies all other gh commands

Benefits

  • Zero configuration required - works out of the box
  • Zero habit changes needed - still use gh commands normally
  • Eliminates common errors - no more "No default remote repository" messages
  • Multi-account friction eliminated - never think about which account is active again

Perfect for developers who work across multiple GitHub organizations or maintain both work and personal projects.

Mocking Node.js Path Separators: The Dependency Injection Solution

The Problem

Node.js path utilities like path.sep and path.join() are bound to the current platform and are read-only - you can't mock them for cross-platform testing:

import * as path from 'node:path';

// Instead of this brittle approach:
function createTempFile(name: string, separator: string) {
  return 'tmp' + separator + name; // Manual string manipulation
}

// Use dependency injection (this version replaces the one above):
function createTempFile(name: string, pathImpl: typeof path = path) {
  return pathImpl.join('tmp', name);
}

// Now you can test both platforms reliably:
createTempFile('data.json', path.win32);  // → tmp\data.json
createTempFile('data.json', path.posix);  // → tmp/data.json

Why This Works

Node.js provides path.win32 and path.posix as separate implementations. Instead of fighting the platform dependency, embrace it through clean dependency injection. Test Windows logic on Linux, test Unix logic on Windows - no mocking needed.
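
A minimal test sketch using Node's built-in test runner (createTempFile is the injected version from the snippet above):

import { test } from 'node:test';
import assert from 'node:assert/strict';
import * as path from 'node:path';

// Both platform behaviours are verifiable on any host OS:
test('builds Windows-style paths', () => {
  assert.equal(createTempFile('data.json', path.win32), 'tmp\\data.json');
});

test('builds POSIX-style paths', () => {
  assert.equal(createTempFile('data.json', path.posix), 'tmp/data.json');
});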

Technical Note: Testing ILogger with NSubstitute

1. The Challenge

Directly verifying ILogger extension methods (e.g., _logger.LogError("...")) with NSubstitute is difficult. These methods resolve to a single, complex generic method on the ILogger interface, making standard Received() calls verbose and brittle.

void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter);
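
For contrast, a direct NSubstitute assertion looks roughly like the sketch below - and because TState is inferred as object here while the real call uses an internal state type, it typically fails to match anything:

// Verbose, and brittle: every generic argument must be matched by hand.
_logger.Received(1).Log(
    LogLevel.Error,
    Arg.Any<EventId>(),
    Arg.Any<object>(),
    Arg.Any<Exception>(),
    Arg.Any<Func<object, Exception, string>>());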

2. The Solution: Inspect the State Argument

The most robust solution is to inspect the state argument passed to the core Log<TState> method. When using structured logging, this state object is an IEnumerable<KeyValuePair<string, object>> that contains the full context of the log call, including the original message template.
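
Concretely, a structured log call reaches Log<TState> with a state that enumerates the named placeholders plus an {OriginalFormat} entry holding the template - this is exactly what the helper below keys on:

_logger.LogError("An error occurred for {WorkId}", 42);
// The state argument enumerates as key/value pairs:
//   { "WorkId",           42 }
//   { "{OriginalFormat}", "An error occurred for {WorkId}" }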

3. Implementation: A Reusable Helper Method

To avoid repeating complex verification logic, create a static extension method for ILogger<T>.

LoggerTestExtensions.cs

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Extensions.Logging;
using NSubstitute;
using FluentAssertions;

public static class LoggerTestExtensions
{
    /// <summary>
    /// Verifies that a log call with a specific level and message template was made.
    /// </summary>
    /// <typeparam name="T">The type of the logger's category.</typeparam>
    /// <param name="logger">The ILogger substitute.</param>
    /// <param name="expectedLogLevel">The expected log level (e.g., LogLevel.Error).</param>
    /// <param name="expectedMessageTemplate">The exact message template string to verify.</param>
    public static void VerifyLog<T>(this ILogger<T> logger, LogLevel expectedLogLevel, string expectedMessageTemplate)
    {
        // Find all calls to the core Log method with the specified LogLevel.
        var logCalls = logger.ReceivedCalls()
            .Where(call => call.GetMethodInfo().Name == "Log" &&
                           (LogLevel)call.GetArguments()[0]! == expectedLogLevel)
            .ToList();

        logCalls.Should().NotBeEmpty($"at least one log call with level {expectedLogLevel} was expected.");

        // Check if any of the found calls match the message template.
        var matchFound = logCalls.Any(call =>
        {
            var state = call.GetArguments()[2];
            if (state is not IEnumerable<KeyValuePair<string, object>> kvp) return false;
            
            return kvp.Any(p => p.Key == "{OriginalFormat}" && p.Value.ToString() == expectedMessageTemplate);
        });

        matchFound.Should().BeTrue($"a log call with the message template '{expectedMessageTemplate}' was expected but not found.");
    }
}

4. How to Use in a Unit Test

Step 1: Arrange. In your test, create a substitute for ILogger<T> and inject it into your System Under Test (SUT).

// In your test class
private readonly ILogger<MyService> _logger;
private readonly MyService _sut;

public MyServiceTests()
{
    _logger = Substitute.For<ILogger<MyService>>();
    _sut = new MyService(_logger);
}

Step 2: Act. Execute the method that is expected to produce a log entry.

[Fact]
public void DoWork_WhenErrorOccurs_ShouldLogError()
{
    // Act
    _sut.DoWorkThatFails();

    // Assert (see Step 3)
}

Step 3: Assert. Use the VerifyLog extension method for a clean and readable assertion.

    // Assert
    _logger.VerifyLog(LogLevel.Error, "An error occurred while doing work for ID: {WorkId}");
}

5. Key Advantages of This Approach

  • Robustness: It verifies the intent (the message template) rather than the final formatted string, making it resilient to changes in parameter values.
  • Readability: The test assertion _logger.VerifyLog(...) is clean, concise, and clearly states what is being tested.
  • Reusability: The extension method can be used across the entire test suite.
  • Precision: It correctly targets the specific log level and message, avoiding ambiguity.