
		<paper>
			<loc>https://jjcit.org/paper/217</loc>
			<title>CDRSHNET: VARIANCE-GUIDED MULTISCALE AND SELF-ATTENTION FUSION WITH HYBRID LOSS FUNCTION TO RESTORE TRAFFIC-SIGN IMAGES CAPTURED IN ADVERSE CONDITIONS</title>
			<doi>10.5455/jjcit.71-1699613114</doi>
			<authors>Milind Vijay Parse,Dhanya Pramod</authors>
			<keywords>Image restoration,Challenging weather conditions,Variance-guided multiscale attention,Custom loss function</keywords>
			<views>3469</views>
			<downloads>1015</downloads>
			<received_date>10-Nov-2023</received_date>
			<revised_date>2-Jan-2024 and 23-Jan-2024</revised_date>
			<accepted_date>25-Jan-2024</accepted_date>
			<abstract>In challenging weather conditions, visual impediments such as raindrops, shadows, haze, and distortions from dirty camera lenses and codec errors adversely affect the quality of traffic-sign images. Existing methods struggle to address these issues comprehensively, necessitating an innovative approach to restoration. This paper introduces the Codec Dirty Rainy Shadow Haze Network (CDRSHNet) architecture, which integrates self-attention (SA) and variance-guided multiscale attention (VGMA) mechanisms. SA captures global dependencies, enabling focused processing of relevant image regions, while VGMA emphasizes informative channels and spatial locations for enhanced representation. A hybrid loss function, combining Gradient Magnitude Similarity Deviation (GMSD) and Charbonnier loss, further boosts image quality. When trained on a diverse dataset, CDRSHNet attains a remarkable 99.3% restoration accuracy, yielding an average SSIM of 0.978 and an average PSNR of 39.58 on the Real Image Dataset (RID). On the Synthetic Image Dataset (SID), the average SSIM is 0.963 and the average PSNR is 39.46. The proposed model significantly improves image clarity and facilitates precise interpretation.</abstract>
		</paper>