404 CTF - Edition 2024

The 404 CTF is France’s largest cybersecurity competition. It is organized jointly by the DGSE, Télécom SudParis, HackademINT, OVHcloud and Viva Technology. The 2024 edition took place from April 20 to May 12, 2024, and gathered about 3,800 competitors across 72 challenges designed by HackademINT. Learn more on the 404 CTF’s website or on the repository.

Previous editions
2022 | 2023



Introduction

For the 2024 edition, I created the Quantum and AI challenges!

I started with an introduction to quantum algorithms based on the framework of the French start-up Quandela, which relies on photonics. This was particularly well suited to presenting Quantum Key Distribution, a quantum communication protocol that is very interesting from a cybersecurity point of view. The last challenge also explores reverse engineering of quantum circuits.

I then took a specific area of AI attacks, model poisoning, and decided to turn it into a suite of challenges. Full solutions are available in the GitHub repository.

Challenges

Quantum Algorithms

Introduction to rail encoding: The quantum information carried by a qubit can be represented in various ways. Here, we use the photonic quantum computer model, which operates with photons and optical hardware. Challenge / Solution
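To give a flavor of dual-rail encoding, here is a toy NumPy sketch (not the challenge's actual code, which uses Quandela's photonic framework): a qubit is a single photon spread over two optical rails, and a 50/50 beam splitter acts on those rails like a Hadamard-style gate.

```python
import numpy as np

# Dual-rail encoding: one photon across two optical modes ("rails").
# |0> -> photon in the top rail, |1> -> photon in the bottom rail.
ket0 = np.array([1.0, 0.0])  # photon in rail 0
ket1 = np.array([0.0, 1.0])  # photon in rail 1

# A balanced beam splitter mixes the two rails, acting like a
# Hadamard-style gate on the encoded qubit.
bs = np.array([[1.0,  1.0],
               [1.0, -1.0]]) / np.sqrt(2)

superposition = bs @ ket0                # (|0> + |1>) / sqrt(2)
probabilities = np.abs(superposition) ** 2
print(probabilities)                     # [0.5 0.5]
```

Measuring which rail the photon exits from then gives 0 or 1 with equal probability.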

Quantum Woman in the Middle: Alice and Bob decide to exchange a secret key using the BB84 protocol. Being confident in the reliability of their protocol, they tolerate some noise. However, Eve manages to intercept the communication channel. Will she succeed in going unnoticed? Challenge / Solution
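A classical simulation illustrates why Eve is detectable (this is a sketch of the generic BB84 intercept-resend attack, not the challenge's code): when Eve measures in a random basis and re-emits, she corrupts about 25% of the sifted key, far above any reasonable noise tolerance.

```python
import random

def bb84_sifted_error(n, eve=False, seed=1):
    """Simulate n BB84 rounds; return the error rate of the sifted key."""
    rng = random.Random(seed)
    errors = matches = 0
    for _ in range(n):
        a_bit, a_basis = rng.randint(0, 1), rng.randint(0, 1)
        bit, basis = a_bit, a_basis          # state of the flying qubit
        if eve:
            e_basis = rng.randint(0, 1)      # intercept-resend attack
            if e_basis != basis:
                bit = rng.randint(0, 1)      # wrong basis -> random outcome
            basis = e_basis                  # Eve re-emits in her own basis
        b_basis = rng.randint(0, 1)
        b_bit = bit if b_basis == basis else rng.randint(0, 1)
        if b_basis == a_basis:               # sifting: keep matching bases
            matches += 1
            errors += (b_bit != a_bit)
    return errors / matches

print(bb84_sifted_error(100_000))            # 0.0 without Eve
print(bb84_sifted_error(100_000, eve=True))  # ~0.25 with Eve
```

The challenge's twist is whether Eve can stay below the noise threshold Alice and Bob agreed to tolerate.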

Multiple Systems: 2 qubits is just 2x2 rails, right? Challenge / Solution
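The teaser hints at a real subtlety, sketched below under standard dual-rail assumptions: two qubits span a 4-dimensional tensor-product space, but two indistinguishable photons in 4 rails span many more Fock states, so the photonic state space is strictly larger than "2x2 rails" suggests.

```python
from itertools import combinations_with_replacement
import numpy as np

# Two dual-rail qubits: 4 optical modes, 2 photons in total.
ket0 = np.array([1.0, 0.0])
two_qubit_00 = np.kron(ket0, ket0)   # |00> in the 4-dim qubit space

# But 2 indistinguishable photons in 4 modes span C(5, 2) = 10 Fock
# states (both photons may bunch into the same rail).
fock_states = list(combinations_with_replacement(range(4), 2))
print(len(two_qubit_00), len(fock_states))   # 4 10
```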

Reverse Engineering: Two circuits with missing parts: can you recover them given the expected output? Challenge / Solution

Artificial Intelligence

Poison [1/2]: Discover federated learning and poison a federated learning aggregator with no defenses. Challenge / Solution
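A minimal sketch of the undefended setting (illustrative; not the challenge's actual server code): with plain FedAvg and no defense, a single client can solve for an update that steers the averaged model exactly onto its target weights.

```python
import numpy as np

# Plain FedAvg: the server simply averages the client updates.
def fedavg(updates):
    return np.mean(updates, axis=0)

honest = [np.array([0.1, -0.2]), np.array([0.12, -0.18])]
target = np.array([5.0, 5.0])   # weights the attacker wants

# With no defense, solve mean(honest + [malicious]) == target
# for the single malicious update.
n = len(honest) + 1
malicious = n * target - np.sum(honest, axis=0)
print(fedavg(honest + [malicious]))   # [5. 5.]
```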

Poison [2/2]: This time, a defense mechanism has been implemented. It aims to prevent any single client from having too much influence by enforcing a maximum variation on the weights. But is that enough? Challenge / Solution
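The defense can be sketched as per-coordinate clipping of each client's deviation from the global weights (an assumed mechanism mirroring the challenge description, with hypothetical values). A single huge update is now neutralized, but a patient attacker can push the maximum allowed deviation every round and still drift the model to its target.

```python
import numpy as np

# Assumed defense: each update may deviate from the current global
# weights by at most `tau` per coordinate before averaging.
def aggregate_clipped(global_w, updates, tau=0.5):
    clipped = [global_w + np.clip(u - global_w, -tau, tau) for u in updates]
    return np.mean(clipped, axis=0)

target = np.array([5.0, 5.0])   # weights the attacker wants
w = np.zeros(2)                 # global model

for _ in range(30):
    attacker = w + 100.0 * (target - w)          # clipped down to w + tau
    w = aggregate_clipped(w, [w, w, attacker])   # two idle honest clients
print(w)                                         # ~[5. 5.]
```

Each round the attacker's clipped contribution moves the average by tau/3 per coordinate, so the cap slows the attack without stopping it.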

Backdoor: The goal of this challenge is to exploit the vulnerabilities of federated learning to place a backdoor in the model. Challenge / Solution

Poison [3/2]: A weak neural network can be poisoned by flipping only 2 weights. Challenge / Solution