Ever since Einstein we’ve been able to say that humans ‘see’ because information about the scene is carried to the eyes by photons reflected off it. So when we talk about Information in photography we are referring to information about the energy and spatial distribution of photons arriving from the scene. The more complete this information, the better we ‘see’. No photons = no information = no see; few photons = little information = see poorly = poor IQ; more photons = more information = see better = better IQ.
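One standard way to make the “more photons = better IQ” chain concrete, not spelled out in the paragraph above but implied by it, is photon shot noise: photon arrivals are Poisson distributed, so a pixel that collects a mean of N photons fluctuates by about sqrt(N), giving a signal-to-noise ratio that grows as sqrt(N). The numbers and the simulation below are purely illustrative, a minimal sketch assuming ideal Poisson statistics and nothing else.

```python
import numpy as np

rng = np.random.default_rng(1)

# Photon arrivals follow Poisson statistics: a mean count of N comes with
# fluctuations of roughly sqrt(N), so SNR scales as sqrt(N) -- i.e. more
# photons means more information about the scene, hence better IQ.
for mean_photons in (10, 100, 1_000, 10_000):
    samples = rng.poisson(mean_photons, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"mean photons {mean_photons:6d}  "
          f"measured SNR {snr:6.1f}  sqrt(N) {mean_photons**0.5:6.1f}")
```

Each tenfold increase in collected photons buys roughly a 3.2x (sqrt of 10) improvement in SNR in this idealized model.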
Sensors in digital cameras work similarly: their output would ideally be the energy and location of every photon incident on them during Exposure. That’s the full information required to recreate an exact image of the original scene for the human visual system, no more and no less. In practice, however, we lose some of this information during sensing, so we have to settle for approximate location and energy, in the form of photoelectron counts collected by pixels of finite area, typically sitting behind a color filter array.
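To illustrate what “approximate location and energy” means in practice, here is a minimal sketch of that sensing step, not the article’s model but an assumed one: location is quantized to a pixel grid, per-photon energy is never recorded, and each pixel only reports a Poisson photoelectron count after thinning by a color filter array and quantum efficiency. The grid size, RGGB layout, transmission values and quantum efficiency are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: mean photon arrivals per pixel during the Exposure.
# Shape and values are illustrative only.
H, W = 4, 6
mean_photons = np.full((H, W), 1000.0)

# RGGB Bayer color filter array: each pixel sees the scene through one filter.
# Transmission values are placeholders, not measured CFA data.
cfa_transmission = {"R": 0.25, "G": 0.50, "B": 0.25}
bayer = np.empty((H, W), dtype="<U1")
bayer[0::2, 0::2] = "R"
bayer[0::2, 1::2] = "G"
bayer[1::2, 0::2] = "G"
bayer[1::2, 1::2] = "B"
transmission = np.vectorize(cfa_transmission.get)(bayer)

# Information loss modeled here:
#  1) photon location is quantized to the pixel grid (the array shape);
#  2) per-photon energy is not recorded -- only a photoelectron count,
#     thinned by CFA transmission and quantum efficiency, Poisson distributed.
quantum_efficiency = 0.5  # assumed value
photoelectrons = rng.poisson(mean_photons * transmission * quantum_efficiency)

print(photoelectrons)
```

The output is one integer per pixel, which is all the downstream pipeline ever gets to work with: counts tied to a pixel location and a CFA color, rather than the exact position and energy of every incident photon.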