Culture Information Technology (CIT)

Super-Resolution with Variable-sized Block using Implicit Neural Representation

Abstract

Recently, Implicit Neural Representations (INRs) have been gaining attention as an approach in which a neural network learns a continuous function that takes coordinates as input and outputs the color values at those locations. Because an INR imposes no constraint on spatial resolution, it can reconstruct an image at any size, which makes it a promising method for super-resolution: a single neural network can represent an image at all resolutions. However, existing INR-based super-resolution methods still lag behind other deep learning methods in performance. This is because a single neural network that takes uniform coordinate values as input struggles to represent information whose complexity varies across different regions of an image. We therefore propose a method that improves super-resolution performance by decomposing the image into variable-sized blocks so that each block has roughly uniform complexity, regardless of how complexity varies across regions; the INR network then learns the image information of each block. By alleviating differences in regional complexity, the network learns regional information more stably and accurately, achieving strong performance even in areas with widely differing levels of complexity.
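
Below is a minimal sketch, in PyTorch, of the two ideas the abstract describes: an INR as an MLP that maps normalized (x, y) coordinates to RGB values and can therefore be sampled on a grid of any resolution, and a variable-sized block decomposition. The splitting criterion shown (pixel variance as the complexity measure) and all names are illustrative assumptions, not the authors' implementation.

    # Minimal sketch, assuming an INR is a coordinate-to-color MLP;
    # not the authors' implementation.
    import torch
    import torch.nn as nn

    class CoordinateMLP(nn.Module):
        # Maps (x, y) in [-1, 1]^2 to an RGB color, so the encoded image
        # can be re-sampled on a coordinate grid of any resolution.
        def __init__(self, hidden_dim=256, num_layers=4):
            super().__init__()
            layers, in_dim = [], 2
            for _ in range(num_layers):
                layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
                in_dim = hidden_dim
            layers.append(nn.Linear(in_dim, 3))
            self.net = nn.Sequential(*layers)

        def forward(self, coords):                  # coords: (N, 2)
            return torch.sigmoid(self.net(coords))  # colors: (N, 3)

    def render(model, height, width):
        # Super-resolution step: query the network on a denser coordinate grid.
        ys = torch.linspace(-1, 1, height)
        xs = torch.linspace(-1, 1, width)
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack([xx, yy], dim=-1)        # (H, W, 2), entries (x, y)
        with torch.no_grad():
            return model(grid.reshape(-1, 2)).reshape(height, width, 3)

    def split_blocks(image, x0, y0, w, h, var_thresh=0.01, min_size=8):
        # Hypothetical variable-sized block decomposition: recursively split a
        # block while its pixel variance (used here as a stand-in complexity
        # measure) exceeds a threshold, so each leaf block has roughly uniform
        # complexity. 'image' is a float tensor of shape (H, W) or (H, W, C).
        block = image[y0:y0 + h, x0:x0 + w]
        if float(block.var()) < var_thresh or min(w, h) <= min_size:
            return [(x0, y0, w, h)]
        hw, hh = w // 2, h // 2
        children = [(x0, y0, hw, hh), (x0 + hw, y0, w - hw, hh),
                    (x0, y0 + hh, hw, h - hh), (x0 + hw, y0 + hh, w - hw, h - hh)]
        out = []
        for cx, cy, cw, ch in children:
            out += split_blocks(image, cx, cy, cw, ch, var_thresh, min_size)
        return out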

Table of Contents

Abstract
1. Introduction
2. Implicit Neural Representation based Signal Processing
3. Proposed Method
4. Experimental Results
5. Conclusion
Acknowledgement
References

Author Information

  • HoonJae Lee, Professor, Dept. of Information Security, Dongseo University, Korea
  • Young Sil Lee, Professor, Dept. of Computer Science, International College, Dongseo University, Korea
  • Suk-Ho Lee, Professor, Dept. of Computer Engineering, Dongseo University, Korea
