Abstract (English)
Color space transformation linking device-dependent and device-independent color spaces is essential for device characterization and cross-media color reproduction. Various conversion methods exist, such as regression, 3D interpolation with a LUT (look-up table), and neural networks. With all of these methods, conversion accuracy depends fundamentally on the sample data used for device characterization. In conventional methods, color samples are selected uniformly in a device-dependent space such as CMY or RGB. However, the distribution of these samples is highly non-uniform in a device-independent color space such as CIEL*a*b*. As a result, the conversion error in the device-independent space varies irregularly with the sample distribution. In this paper, a color sampling method based on equi-visual perception is proposed to obtain approximately uniform color samples in CIEL*a*b* space. To evaluate the transformation accuracy of the proposed method, color space transformations are simulated using regression, 3D LUT interpolation, and neural network techniques.
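The non-uniformity the abstract describes can be seen with a minimal sketch (not from the paper): sampling the device gray axis at equal intervals and converting to CIEL*a*b* yields unequal lightness steps. Standard sRGB/D65 conversion formulas are assumed here; the paper's CMY printer model will differ.

```python
# Sketch (assumption: standard sRGB/D65 formulas, not the paper's device model).
# Uniform steps in device space -> unequal L* steps in CIELAB.

def srgb_to_lab(r, g, b):
    """Convert sRGB values in [0, 1] to CIEL*a*b* (D65 white point)."""
    def to_linear(c):  # undo the sRGB transfer curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = to_linear(r), to_linear(g), to_linear(b)
    # Linear RGB -> CIE XYZ (sRGB primaries, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t):  # CIELAB cube-root nonlinearity with linear toe
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

if __name__ == "__main__":
    grays = [i / 10 for i in range(11)]        # 11 equal steps in device space
    L = [srgb_to_lab(v, v, v)[0] for v in grays]
    steps = [hi - lo for lo, hi in zip(L, L[1:])]
    print("L* steps:", [round(s, 2) for s in steps])  # unequal steps
```

Equalizing such perceptual steps, rather than device-space steps, is the motivation for the equi-visual-perception sampling proposed in the paper.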
Table of Contents
1. Introduction
2. Color Sampling Based on Equi-visual Perception
2.1. Equi-visual Perception Characteristics
3. Interpolation Using a LUT
3.1. CIELab–CMY Conversion
3.2. Barycentric Interpolation
4. Color Conversion Using a Neural Network
4.1. Neural Network Training
5. Regression-Model-Based Method
6. Experiments and Discussion
7. Conclusion
References