The most important component of a digital camera is the image sensor. Nowadays, CCD and CMOS sensors dominate the digital market. These sensors capture a continuous signal, the incoming light, which is converted through a filtering process into a discrete signal. Within this pipeline, simplified here into a single step, the element that controls the white balance is key: the same scene under the same conditions, captured with different cameras and even identical settings, may yield different images (in luminance, colour, etc.), due in part to the white balance corrector. This problem is well known in several fields (Computer Vision, Photography, Painting). In object recognition, for instance, when a classifier has been trained on images from one specific camera and is then evaluated on images captured with a different camera, its performance usually drops: the images used during training and those used during testing no longer match, even if the content of both sets is practically the same (background and foreground). In the literature, this problem can be addressed with techniques such as active learning, where the classifier is retrained on a few new images coming from the new camera. The issue is not limited to different cameras; different scenarios cause it too. If we trained a person classifier on people captured in winter and tested it on images acquired during the summer, we would probably face the same problem.
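To make the white balance effect concrete, here is a minimal sketch of the classic gray-world correction, which assumes the scene averages to gray and scales each colour channel accordingly. The "camera" gain vectors below are invented for illustration; they stand in for two cameras whose white balance correctors disagree.

```python
import numpy as np

def gray_world_balance(image: np.ndarray) -> np.ndarray:
    """Gray-world white balance: scale each channel so its mean
    matches the global mean, assuming the scene averages to gray."""
    img = image.astype(np.float64)
    channel_means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# The same neutral gray scene seen by two hypothetical cameras
# whose internal correctors apply different per-channel gains:
scene = np.full((4, 4, 3), 128, dtype=np.uint8)
camera_a = np.clip(scene * np.array([1.2, 1.0, 0.8]), 0, 255).astype(np.uint8)
camera_b = np.clip(scene * np.array([0.9, 1.0, 1.1]), 0, 255).astype(np.uint8)

balanced_a = gray_world_balance(camera_a)  # both cameras now agree
balanced_b = gray_world_balance(camera_b)  # on the neutral patch
```

The raw outputs of the two cameras differ pixel for pixel, which is exactly why a classifier trained on one camera can stumble on the other; after the gray-world correction both converge to the same neutral values.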
For humans, our eyes are our image sensors, although our brains are far more complex than the hardware and software in a camera. Like cameras, we also face the white balancing problem. Unlike cameras, however, our brains can interpret an image using prior knowledge. Hence, we may perceive an image we saw just a few minutes ago differently, simply because our brain analysed it in a different way. This fascinating fact, combined with the madness of the Internet, went viral not long ago.
The debate was known as "White and gold or blue and black dress?", and I am sure you remember it. Some people saw the dress as blue and black, while others saw it as white and gold. Some people on the Internet analysed the image by sampling the pixel colour of each part of the dress; others argued that it could legitimately be perceived in different colours. The phenomenon went viral in a matter of hours, and although some posts explained the reasons behind these different perceptions, people continued arguing.
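The standard explanation of the dress can be sketched numerically with a von Kries-style chromatic adaptation: the brain divides what it sees by the illuminant it assumes, so one fixed pixel value yields two different surface colours. The RGB triple and illuminant vectors below are illustrative assumptions, not values sampled from the actual photo.

```python
import numpy as np

# One ambiguous pixel value (illustrative, not taken from the dress photo).
pixel = np.array([120.0, 100.0, 80.0])

# Von Kries-style adaptation: divide by the illuminant the viewer assumes.
warm_light = np.array([1.2, 1.0, 0.8])  # viewer assumes warm, yellowish light
cool_light = np.array([0.8, 1.0, 1.2])  # viewer assumes cool, bluish light

seen_under_warm = pixel / warm_light  # yellow discounted: a bluer surface
seen_under_cool = pixel / cool_light  # blue discounted: a warmer surface
```

A viewer who discounts warm light recovers a cooler surface (the "blue and black" reading), while one who discounts bluish light recovers a warmer surface (the "white and gold" reading), even though the pixel itself never changes.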
Is this the only example where perception can play with our minds? Of course not! There are many examples; a quick search on the Internet will turn up plenty of them.
In conclusion, there are many factors (visual experience, the neural system, information coming from other sense organs) that lead us as humans to interpret a simple picture differently. Thus, there is no need for arguments, only fascination. As for robots, unless a processing stage tries to emulate the human brain, a pixel's colour or intensity will always remain the same.