
Puzzle 3 - L3akCTF 2025

So, you saw how we handled the last puzzle. It was flashy, and watching the browser solve itself was a neat party trick, but it’s hella inefficient. Relying on Selenium to brute-force the DOM with clicks felt like using a sledgehammer for brain surgery. For this next stage, the puzzles got harder and the timer got drastically shorter. It was time to evolve.

As hinted before, we knew there was a backend API. It was time to stop playing with the frontend puppet and start pulling the strings directly.

API Abuse, in a Cool Way

Why scrape the HTML when you can just ask the server for the data directly? A quick look at the frontend JavaScript revealed the endpoints we needed: /api/newpuzzle and /api/checkanswer.

The new methodology is surgical and stripped of all excess:

  1. Direct Requisition: Send a GET request to /api/newpuzzle. The server hands over a JSON object containing everything: puzzle dimensions, the hint URL, and a list of all the scrambled pieces, pre-encoded in Base64. No more parsing the CSS bullshit (thank God šŸ˜­šŸ™)

  2. Headless Hinting: The hint URL still points to the original image. We still need to resolve it. For this, Selenium is brought out of retirement to navigate to the URL, grab the final image source, and immediately shut down.

  3. Reference Generation: This remains the same. Download the full-resolution image, perform some calculations to crop and resize it to match the puzzle’s true dimensions, and slice it into a grid of perfect reference tiles.

  4. The Toolbox: This is where things get interesting. Without access to the source code (so far ;)), we don’t know the exact preprocessing the reference image went through before being sliced into tiles. Relentless testing made this evident: there are subtle image variations, compression artifacts and slight color and canvas shifts. These were present in the previous puzzle too, but with the pieces now even smaller, a single offset pixel can fool the template-matching algorithm from earlier. A single strategy was no longer reliable, so why not adopt several of them, escalating in intensity:

    • Attempt 1: Pixel Difference. A direct, pixel-by-pixel cv2.absdiff comparison.
    • Attempt 2: Perceptual Hashing. If pixel-perfect fails, switch to imagehash. This ignores minor artifacts and compares the “fingerprints” of the images.
    • Attempt 3 & 4: The Nuclear Option. If both fail, we re-run them, but first apply a heavy GaussianBlur to both the reference and current tiles. This blurs out fine details and noise, forcing the match to be based on the general color and shape of the tiles. It’s ugly, but it works when nothing else will.
  5. The Verdict: Once a match is found, we don’t need to simulate clicks. We construct a JSON payload with the puzzle ID and the answer—a simple list of integers representing the correct final positions of the initial pieces—and POST it directly to /api/checkanswer.
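The steps above boil down to two small helpers. This is a minimal sketch, assuming the JSON field names described in this writeup (`pieces`, `puzzle_id`, `answer`); the decoded bytes are what you would hand to cv2.imdecode:

```python
import base64

def decode_pieces(pieces_b64):
    """Turn the Base64 strings from /api/newpuzzle into raw image bytes,
    each ready to be fed to cv2.imdecode."""
    return [base64.b64decode(p) for p in pieces_b64]

def build_answer_payload(puzzle_id, answer):
    """Shape the solution the way /api/checkanswer expects:
    a puzzle ID plus a flat list of plain ints."""
    return {'puzzle_id': puzzle_id, 'answer': [int(x) for x in answer]}
```

Everything else (downloading the hint image, slicing, matching) happens between these two calls.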

After submitting a solution, we can move directly on to the next puzzle, cutting the overhead and drastically speeding up the solver.

Technical Dive

The core of the new solver is its ability to adapt its matching strategy. We cycle through four methods until one yields a correct solution from the API.

# ...
    for m in range(4):
        method = diff_check if m % 2 == 0 else hash_check
        preprocess = m >= 2
        logger.info(f"Using method: {method.__name__} with preprocess={preprocess}")
# ...

This loop ensures that if a simple pixel difference fails, we escalate to perceptual hashing, and then to pre-processed (blurred) versions of both. This resilience was key to achieving a 100% solve rate against the varied puzzle sets.
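The escalation order can be made explicit as a tiny generator (a sketch; `diff_check` and `hash_check` are the matcher functions from the loop above, referenced here by name only):

```python
def strategy_schedule():
    """Yield (matcher_name, preprocess) pairs in order of escalating
    aggressiveness: plain diff, plain hash, blurred diff, blurred hash."""
    for m in range(4):
        yield ("diff_check" if m % 2 == 0 else "hash_check", m >= 2)
```

Listing it out confirms the cheap strategies run first and the Gaussian-blurred variants only kick in on attempts 3 and 4.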

The Final Submission

Gone are the ActionChains. The final step is a clean, simple API call. After linear_sum_assignment gives us the optimal mapping, we format it into the list the API expects and send it off.

# `answer` is a list where answer[final_pos] = original_pos
answer = [0] * n
for i in range(n):
    final_position = col_ind[i]
    original_index = row_ind[i]
    answer[final_position] = original_index

payload = {
    'puzzle_id': ID,
    'answer': [int(x) for x in answer]
}
logger.info("Submitting final answer to the API...")
response = session.post(f"{BASE_URL}/api/checkanswer", json=payload)
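To see what linear_sum_assignment is actually deciding, here is a toy stand-in (pure-Python brute force over permutations, only viable for tiny n) run on a hypothetical 3Ɨ3 cost matrix:

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Minimal stand-in for scipy's linear_sum_assignment: try every
    permutation of columns and keep the one with the lowest total cost."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(range(n)), list(best)

cost = [
    [9, 2, 7],   # tile 0 matches reference 1 best
    [6, 4, 3],   # tile 1 matches reference 2 best...
    [5, 8, 1],   # ...but tile 2 matches reference 2 even better
]
row_ind, col_ind = brute_force_assignment(cost)
```

Both tiles 1 and 2 prefer reference 2, so the solver resolves the conflict globally: tile 2 keeps reference 2 and tile 1 falls back to reference 0, minimizing the total cost rather than each tile's individual cost.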

This new approach is brutally efficient. Each puzzle is now solved in seconds, bottlenecked only by the download speed of the source image. The diff_check method is the fastest, usually taking at most 5 seconds to map all the tiles, so even with fallbacks a puzzle takes at most 20 seconds to breeze through.

Script

import os
import time
import requests
import cv2
import numpy as np
from PIL import Image
from io import BytesIO
import base64
from tqdm import trange
import imagehash
from scipy.optimize import linear_sum_assignment
import logging
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from collections import Counter

# --- Configuration ---
BASE_URL = 'http://34.55.69.223:14001'
cookie = {
    'name': 'session',
    'value': 'mxyALe2gmk2lAYwX7PjP3YBxr3VW63sCu6RLnlOYuoNF7vzAQ_ySSycSOFe5spkv2mj70iqGfc2NbcWbuG6DggbME01xAQbSGGIZ6QV9UVESSNGNRneQqaUrwyz5yNN6nI9MKSoe-xYcSEAMHu0QbUsEqMTAfETN4QVuSxbMhjI='
}
SESSION_COOKIE = cookie['value']
PUZZLE_URL = 'http://34.55.69.223:14001/puzzle'

logging.basicConfig(format='BIT> %(message)s')
logger = logging.getLogger()
logger.setLevel(logging.INFO)
options = webdriver.FirefoxOptions()


def phash(tile):
    pil_img = Image.fromarray(cv2.cvtColor(tile, cv2.COLOR_BGR2RGB))
    return imagehash.phash(pil_img)

def slice_grid(img, rows, cols):
    h, w = img.shape[:2]
    slice_h, slice_w = h // rows, w // cols
    return [img[i*slice_h:(i+1)*slice_h, j*slice_w:(j+1)*slice_w] for i in range(rows) for j in range(cols)]

def preprocess_for_robust_matching(image):
    return cv2.GaussianBlur(image, (21, 21), 0)

def get_border_color(tiles):
    border_pixels = []
    for tile in tiles:
        h, w, _ = tile.shape
        if h > 1 and w > 1:
            border_pixels.extend([tuple(p) for p in tile[0, :, :]])
            border_pixels.extend([tuple(p) for p in tile[h-1, :, :]])
            border_pixels.extend([tuple(p) for p in tile[1:h-1, 0, :]])
            border_pixels.extend([tuple(p) for p in tile[1:h-1, w-1, :]])

    if not border_pixels:
        return [0, 0, 0]

    most_common_pixel = Counter(border_pixels).most_common(1)[0][0]
    return list(most_common_pixel)

def hash_check(cur_tiles, ref_tiles, preprocess=False):
    if preprocess:
        cur_tiles = [preprocess_for_robust_matching(tile) for tile in cur_tiles]
        ref_tiles = [preprocess_for_robust_matching(tile) for tile in ref_tiles]

    hashes_ref = [phash(tile) for tile in ref_tiles]
    hashes_cur = [phash(tile) for tile in cur_tiles]

    n = len(cur_tiles)
    cost_matrix = np.zeros((n, n))
    for i in trange(n):
        for j in range(n):
            cost = hashes_ref[j] - hashes_cur[i]
            cost_matrix[i, j] = cost

    row_ind, col_ind = linear_sum_assignment(cost_matrix)
    return row_ind, col_ind

def diff_check(cur_tiles, ref_tiles, preprocess=False):
    if preprocess:
        cur_tiles = [preprocess_for_robust_matching(tile) for tile in cur_tiles]
        ref_tiles = [preprocess_for_robust_matching(tile) for tile in ref_tiles]

    n = len(cur_tiles)
    cost_matrix = np.zeros((n, n))

    for i in trange(n):
        for j in range(n):
            abs_diff = cv2.absdiff(cur_tiles[i], ref_tiles[j])
            cost = np.sum(abs_diff)
            cost_matrix[i, j] = cost

    row_ind, col_ind = linear_sum_assignment(cost_matrix)
    return row_ind, col_ind

session = requests.Session()
session.cookies.set('session', SESSION_COOKIE)
deebooged = True
while True:
    for m in range(4):
        method = diff_check if m % 2 == 0 else hash_check
        preprocess = m >= 2
        driver = webdriver.Firefox(options=options)
        logger.info(f"Using method: {method.__name__} with preprocess={preprocess}")
        logger.info("Requesting a new puzzle from the API...")
        try:
            response = session.get(f"{BASE_URL}/api/newpuzzle")
            response.raise_for_status()
            data = response.json()
            logger.info(f"Successfully received puzzle: '{data['title']}' by {data['artist']}")
        except requests.exceptions.RequestException as e:
            logger.error(f"Failed to get new puzzle: {e}")
            break

        ID = data['puzzle_id']
        GRID_ROWS = data['rows']
        GRID_COLS = data['cols']
        WIDTH = data['width']
        HEIGHT = data['height']
        FULL_IMAGE_URL = data['url'] #.replace("name=small", "name=4096x4096")

        if data['artist'] == 'ZinFyu':
            FULL_IMAGE_URL = "https://furrycdn.org/img/view/2024/5/24/335604.jpg"
        else:
            driver.get(FULL_IMAGE_URL)

            logger.info("On Twitter page, finding the image...")
            image_element = WebDriverWait(driver, 20).until(
                EC.presence_of_element_located((By.CSS_SELECTOR, 'img[alt="Image"]'))
            )
            FULL_IMAGE_URL = image_element.get_attribute('src')
            if not FULL_IMAGE_URL:
                raise ValueError("Found the image element but it has no src.")
            FULL_IMAGE_URL = FULL_IMAGE_URL.replace("name=small", "name=4096x4096")
            logger.info(f"Found full image URL: {FULL_IMAGE_URL}")

        driver.quit()
        logger.info("Resolved the full-resolution image URL.")

        cur_tiles_b64 = data['pieces']
        cur_tiles = []
        for b64_string in cur_tiles_b64:
            img_data = base64.b64decode(b64_string)
            img_np = np.frombuffer(img_data, np.uint8)
            img = cv2.imdecode(img_np, cv2.IMREAD_COLOR)
            cur_tiles.append(img)

        logger.info(f"Grid: {GRID_ROWS}x{GRID_COLS}. Decoded {len(cur_tiles)} puzzle pieces.")

        try:
            response = requests.get(FULL_IMAGE_URL)
            response.raise_for_status()
            full_img_pil = Image.open(BytesIO(response.content))
            full_img = cv2.cvtColor(np.array(full_img_pil), cv2.COLOR_RGB2BGR)
            logger.info(f"Full image downloaded from: {FULL_IMAGE_URL}")
        except requests.exceptions.RequestException as e:
            logger.error(f"Failed to download full image: {e}")
            continue

        border_size = 5
        border_color = get_border_color(cur_tiles)

        target_h, target_w = GRID_ROWS * HEIGHT - 2 * border_size, GRID_COLS * WIDTH - 2 * border_size
        current_h, current_w = full_img.shape[:2]
        logger.info(f"Original full image dimensions: {current_w}x{current_h} pixels.")
        logger.info(f"Target full image dimensions: {target_w}x{target_h} pixels.")

        full_img_resized = full_img[0:target_h, 0:target_w]
        #full_img_resized=cv2.resize(full_img_resized, (target_w, target_h), interpolation=cv2.INTER_AREA)
        full_img_bordered = cv2.copyMakeBorder(full_img_resized, border_size, border_size, border_size, border_size, cv2.BORDER_CONSTANT, value=[int(c) for c in border_color])

        ref_tiles = slice_grid(full_img_bordered, GRID_ROWS, GRID_COLS)
        logger.info(f"Reference image processed and sliced into {len(ref_tiles)} tiles.")
        n = len(cur_tiles)

        row_ind, col_ind = method(cur_tiles, ref_tiles, preprocess)
        piece_locations = {ref_idx: cur_idx for cur_idx, ref_idx in zip(row_ind, col_ind)}

        answer = [0] * n
        for i in range(n):
            final_position = col_ind[i]
            original_index = row_ind[i]
            answer[final_position] = original_index
        answer = [int(x) for x in answer]
        logger.info("Calculated final piece arrangement.")
        if deebooged:
            logger.info("Debugging mode: Saving matched images for inspection.")
            deebooged = False
            debug_dir = "debug_matches"
            os.makedirs(debug_dir, exist_ok=True)

            logger.info(f"Saving matched images to '{debug_dir}/' for debugging...")
            for ref_idx, cur_idx in piece_locations.items():
                ref_img_to_save = ref_tiles[ref_idx]
                cur_img_to_save = cur_tiles[cur_idx]
                h, w, _ = cur_img_to_save.shape
                if ref_img_to_save.shape[0] != h or ref_img_to_save.shape[1] != w:
                    logging.warning(f"Resizing reference image {ref_idx} to match current image dimensions.")
                    logging.warning(f"Reference image shape: {ref_img_to_save.shape}, Current image shape: {cur_img_to_save.shape}")
                    ref_img_to_save = cv2.resize(ref_img_to_save, (w, h), interpolation=cv2.INTER_AREA)
                abs_img_to_save = cv2.absdiff(ref_img_to_save, cur_img_to_save)

                combined_img = np.concatenate((ref_img_to_save, cur_img_to_save, abs_img_to_save), axis=1)

                cv2.imwrite(os.path.join(debug_dir, f"match_{ref_idx:03d}_(ref_vs_cur).png"), combined_img)

        payload = {
            'puzzle_id': ID,
            'answer': answer
        }
        logger.info("Submitting final answer to the API...")
        try:
            response = session.post(f"{BASE_URL}/api/checkanswer", json=payload)
            response.raise_for_status()
            result = response.json()

            if result.get('correct'):
                logger.info(f"šŸŽ‰ Puzzle Solved! Message: {result.get('winmessage', 'Success!')}")
                break
            else:
                logger.error("Puzzle not solved. The API reported an incorrect answer.")

        except requests.exceptions.RequestException as e:
            logger.error(f"Failed to submit answer: {e}")
            deebooged = True
    else:
        # for/else: runs only if no method broke out of the loop, i.e. all four failed
        logger.info("All methods exhausted without solving the puzzle. Exiting...")
        exit()

    logger.info("--- Puzzle attempt complete. Starting next puzzle... ---")
    time.sleep(2)

It is also important to note that, for some odd reason, one of the images is slightly offset in position. I had to modify the script just for that particular puzzle:

offset = 10
target_h_crop = GRID_ROWS * HEIGHT - (2 * border_size)
target_w_crop = GRID_COLS * WIDTH - (2 * border_size)
target_h = target_h_crop + 2 * border_size
target_w = target_w_crop + 2 * border_size + offset * 2
full_img_cropped = cv2.resize(full_img, (target_w, target_h), interpolation=cv2.INTER_AREA)

full_img_cropped = full_img_cropped[0:target_h_crop, 0:target_w_crop]
full_img_bordered = cv2.copyMakeBorder(full_img_cropped, border_size, border_size, border_size, border_size, cv2.BORDER_CONSTANT, value=[int(c) for c in border_color])
ref_tiles = slice_grid(full_img_bordered, GRID_ROWS, GRID_COLS)

Also, since this method runs entirely without a GUI, we didn’t get the chance to admire the *cough, cough* gorgeous furry art xd, BUT YOU WILL SEE IT!

BEHOLD… FURRIES