Generally, the mapping from RAW to RGB is from H*W*1 to H*W*3 (or equivalently from 0.5H*0.5W*4 to H*W*3).
But why is the training data here arranged as H*W*4 to H*W*3?
Thanks
Posted by: xiangyu_xu @ July 24, 2019, 10:34 p.m.

You are right - the mapping should generally be from 0.5H*0.5W*4 to H*W*3, but here the size of the target images was decreased by a factor of 2 (from 4092x3018px to 2046x1509px) so that full images can be processed on a GPU.
Posted by: andrey.ignatoff @ Aug. 4, 2019, 12:51 p.m.

That seems odd. Does that mean we need to train a model that does demosaicing + downsampling ...?
Posted by: yexin @ Aug. 12, 2019, 6:49 a.m.
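
To make the dimension bookkeeping concrete, here is a minimal numpy sketch of the usual Bayer-to-4-channel packing discussed above. The RGGB channel order and the function name pack_raw are assumptions for illustration, not taken from the challenge code:

```python
import numpy as np

def pack_raw(bayer):
    """Pack an H x W Bayer mosaic (assumed RGGB) into an H/2 x W/2 x 4 array.

    This is why an H*W*1 RAW input is equivalent to a 0.5H*0.5W*4 input.
    """
    h, w = bayer.shape
    assert h % 2 == 0 and w % 2 == 0
    return np.stack(
        [
            bayer[0::2, 0::2],  # R
            bayer[0::2, 1::2],  # G (red row)
            bayer[1::2, 0::2],  # G (blue row)
            bayer[1::2, 1::2],  # B
        ],
        axis=-1,
    )  # shape (H/2, W/2, 4)

# Example with the sizes from this thread (dummy data):
raw = np.zeros((3018, 4092), dtype=np.uint16)  # full-resolution Bayer mosaic
x = pack_raw(raw)                              # (1509, 2046, 4) network input
# Because the RGB targets were downsampled from 4092x3018 to 2046x1509,
# input and target share the same spatial size, giving the H*W*4 -> H*W*3
# mapping the question asks about.
```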