Deep Learning for Image Improvement in Low Light
Keywords:
Global data, Global spatial attention, Multi-granular dense blocks (MGDBs), Network depth, Neural network

Abstract
In recent years, deep convolutional neural
networks have shown remarkable success in
enhancing low-light images. These
approaches typically improve feature
extraction by increasing network depth and
complexity, but deeper models slow down
inference; faster inference is needed while
preserving both local and global information. To address
this, we propose a novel approach called
ABSGN (Attention-based Broadly Self-guided
Network), inspired by SGN (Self-guided
Network), for real-world low-light image
enhancement.
ABSGN utilizes a top-down, self-guiding
architecture that effectively integrates multi-scale data and extracts valuable local features
to restore clear images. Notably, this
approach requires fewer parameters than
UNet-like structures while remaining more
effective. To
enhance the network's performance, we
introduce Multi-Granular Dense Blocks
(MGDBs) as a novel extension of dense blocks
in the feature space. These modules extract
global information through a mechanism we
refer to as global spatial attention, which
improves results and helps suppress noise
across various exposure
levels.
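To make the two ideas above concrete, the following is a minimal NumPy sketch of (a) a global spatial attention step that weights every spatial position with a softmax over the whole feature map, and (b) one top-down self-guidance step in which a coarse branch guides the full-resolution branch. All function names, and the use of channel-wise average/max pooling as the spatial descriptor, are illustrative assumptions, not the exact ABSGN design.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a flat array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def global_spatial_attention(feat):
    """Weight every spatial position of feat (C, H, W) by a global softmax map."""
    _, H, W = feat.shape
    # Channel-wise average and max pooling give a compact spatial descriptor;
    # in a trained network, a learned 1x1 convolution would combine them.
    score = feat.mean(axis=0) + feat.max(axis=0)
    attn = softmax(score.ravel()).reshape(H, W)  # softmax over ALL H*W positions
    return feat * attn[None] * (H * W)           # rescale so the mean weight is ~1

def downsample(feat, k=2):
    """Average-pool (C, H, W) by factor k in each spatial dimension."""
    C, H, W = feat.shape
    return feat.reshape(C, H // k, k, W // k, k).mean(axis=(2, 4))

def upsample(feat, k=2):
    """Nearest-neighbour upsample by factor k."""
    return feat.repeat(k, axis=1).repeat(k, axis=2)

def self_guided_pass(feat):
    """One top-down guidance step: a coarse branch informs the full-res branch."""
    coarse = downsample(feat)                  # low-resolution (top) branch
    coarse = global_spatial_attention(coarse)  # cheap global context at low res
    guidance = upsample(coarse)                # broadcast guidance to full size
    return np.concatenate([feat, guidance], axis=0)  # fuse for later layers
```

Because the softmax is taken over all spatial positions at once, every output pixel is modulated by global image statistics, which is what lets the coarse branch handle exposure-dependent noise before guiding the full-resolution features.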
Extensive experiments on widely used
benchmarks confirm that our approach
outperforms most state-of-the-art low-light
image enhancement methods.
License
Copyright (c) 2023 Journal of IoT Security and Smart Technologies (e-ISSN:2583-6226)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.