Hybrid Quantum-Classical Deep Learning for Low-Light Object Recognition
The Challenge
Object recognition in low-illumination environments remains a critical unsolved problem in computer vision. Conventional deep learning models suffer significant accuracy degradation when applied to dark or poorly lit imagery — a limitation with direct consequences for autonomous vehicles, surveillance systems, and medical imaging. Classical approaches require large parameter counts and computationally expensive architectures to compensate, making real-world deployment impractical.
Our Approach
A hybrid classical-quantum machine learning (QML) framework was designed and implemented from scratch. The pipeline combines a classical CNN backbone for feature extraction with a Variational Quantum Circuit (VQC) classifier head, connected via amplitude encoding into a 6-qubit (2⁶ = 64-dimensional) Hilbert space that exactly accommodates the CNN's 64-dimensional feature vector. Three benchmark datasets were used — DarkFace (6,000 samples), ExDark (5,247 samples across 12 object classes), and LOL (485 paired low-light/normal-light images). A CLAHE-based low-light enhancement module (applied in LAB colour space) was integrated as a preprocessing stage and evaluated against ground-truth illumination references using PSNR and SSIM.
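To make the enhancement-evaluation step concrete, here is a minimal numpy sketch of the PSNR metric mentioned above. This is an illustrative reimplementation, not the project's code: in practice CLAHE itself would typically be applied to the lightness channel via OpenCV's `cv2.createCLAHE`, and SSIM via scikit-image's `structural_similarity`, since SSIM requires windowed local statistics.

```python
import numpy as np

def psnr(reference: np.ndarray, enhanced: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a ground-truth image and an enhanced one."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10((max_val ** 2) / mse)

# Example: a uniform 10-level error on 8-bit images gives ~28.13 dB
ref = np.full((64, 64), 100, dtype=np.uint8)
out = np.full((64, 64), 110, dtype=np.uint8)
print(round(psnr(ref, out), 2))  # → 28.13
```

Higher PSNR against the paired normal-light reference (as in the LOL dataset) indicates a more faithful enhancement.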
What We Built
The final system is a memory-efficient, end-to-end trainable hybrid pipeline:
• Classical CNN compresses 64×64 RGB images to a 64-dimensional feature vector
• Amplitude encoding maps features into a 2⁶-dimensional quantum Hilbert space
• A 3-layer VQC with RY/RZ rotations and ring CNOT entanglement performs classification
• Pauli-Z expectation values are measured and passed to a classical output head
• The full pipeline runs on consumer hardware (NVIDIA RTX 2060, 32 GB RAM) with batch-based memory management, keeping peak RAM usage under 1 GB
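The quantum stages of the pipeline can be sketched in plain numpy as a statevector simulation. This is a hedged illustration of the circuit structure described above (amplitude encoding, per-qubit RY/RZ rotations, ring CNOT entanglement, Pauli-Z readout), not the project's implementation; in practice a framework such as PennyLane or Qiskit would supply these primitives and their gradients, and function names like `vqc` are invented here for clarity.

```python
import numpy as np

N_QUBITS, N_LAYERS = 6, 3  # 2**6 = 64 amplitudes; 3 layers * 6 qubits * 2 angles = 36 parameters

def amplitude_encode(features: np.ndarray) -> np.ndarray:
    """L2-normalize a 64-d feature vector into a valid 6-qubit state."""
    state = features.astype(np.complex128)
    return state / np.linalg.norm(state)

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of the 64-amplitude state vector."""
    psi = np.moveaxis(state.reshape([2] * N_QUBITS), q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cnot(state, c, t):
    """Flip target qubit t on the control==1 subspace of qubit c."""
    psi = state.reshape([2] * N_QUBITS).copy()
    sub = [slice(None)] * N_QUBITS
    sub[c] = 1
    t_axis = t if t < c else t - 1  # axis c is consumed by the integer index
    psi[tuple(sub)] = np.flip(psi[tuple(sub)], axis=t_axis)
    return psi.reshape(-1)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=np.complex128)

def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def vqc(state, params):
    """N_LAYERS of per-qubit RY/RZ rotations, each followed by a ring of CNOTs."""
    for layer in params:  # params has shape (N_LAYERS, N_QUBITS, 2)
        for q, (a, b) in enumerate(layer):
            state = apply_1q(state, ry(a), q)
            state = apply_1q(state, rz(b), q)
        for q in range(N_QUBITS):  # ring entanglement: 0->1, 1->2, ..., 5->0
            state = apply_cnot(state, q, (q + 1) % N_QUBITS)
    return state

def pauli_z_expvals(state):
    """<Z> per qubit: p(qubit=0) - p(qubit=1), fed to the classical output head."""
    probs = np.abs(state.reshape([2] * N_QUBITS)) ** 2
    return np.array([
        probs.sum(axis=tuple(i for i in range(N_QUBITS) if i != q)) @ np.array([1.0, -1.0])
        for q in range(N_QUBITS)
    ])

rng = np.random.default_rng(0)
features = rng.normal(size=2 ** N_QUBITS)          # stand-in for the CNN's 64-d output
params = rng.normal(size=(N_LAYERS, N_QUBITS, 2))  # the 36 trainable quantum parameters
out = pauli_z_expvals(vqc(amplitude_encode(features), params))
print(params.size, out.shape)  # 36 (6,)
```

Because every gate is unitary, the state stays normalized through all three layers, and each readout value lies in [-1, 1].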
The Outcome
The research demonstrates that a hybrid QML model with only 36 quantum parameters achieves competitive object recognition accuracy on low-light benchmarks, with significantly fewer trainable parameters than equivalent classical architectures. An ablation study across VQC depths (1–4 layers) quantifies the contribution of quantum circuit expressibility to classification performance. Results are being prepared for journal submission.
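The 36-parameter figure follows directly from the circuit shape: each VQC layer contributes one RY and one RZ angle per qubit, so a depth-L circuit on 6 qubits has L × 6 × 2 trainable angles. A quick check across the ablation depths (assuming the same per-layer structure at every depth):

```python
n_qubits, rotations_per_qubit = 6, 2  # one RY + one RZ angle per qubit per layer
for depth in range(1, 5):  # the 1-4 layer ablation sweep
    print(depth, depth * n_qubits * rotations_per_qubit)
# depth 3 reproduces the 36 quantum parameters reported above
```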
Facing a Similar Challenge?
Every project we take on starts with understanding your specific constraints and goals. Let's talk about yours.