One of the easiest ways to ruin a shot is to try to capture a scene whose dynamic range exceeds the capability of your camera's digital sensor. The consequence is impenetrable, muddy shadows, featureless white patches, or both. Either way, visual information & detail are lost.
The human eye is very adaptive. If one stares at a subject without scanning the scene, the eye can detect about 14 EV (stops) of dynamic range. This is a range that the best medium format sensors are now able to achieve. But when the eye is allowed to scan & adjust, it can detect in excess of 24 EV. Of course, there are dozens of articles all over the web that claim to prove both of these figures wrong. The problem is that comparing the eye to a digital camera is like comparing an F1 car to a rowing boat; two forms of transport, but quite different in form, function & practicality.
So if one wants to dig deep into the science, there are plenty of pedants out there, delighted to present all manner of scientific theory with one hypothesis after another. But in practice there's no need, as long as one understands the general premise that the eye is simply more capable than a camera because it has the ability to adapt.
We've all noticed that when we step outside after dark, our eyes eventually adjust to the low level of ambient light, enabling us to see details that were initially invisible. In fact our eyes are biased towards sensitivity in low light rather than bright light, possibly as an evolutionary defence mechanism to protect us from predators before humans were able to create artificial means of illumination. But we spend so much of our modern lives in illuminated environments that we sometimes don't realise how well we can actually see, even in the darkest of conditions. It is this adaptive sensitivity that can fool us into thinking that our camera can simply record, faithfully, whatever we are observing.
So we should always be aware that while cameras are usually capable of recording what our eyes can see, there are many high contrast scenes that remain beyond their sensors' capabilities. Remember, high contrast isn't limited to sun drenched beaches. A dark landscape with a full moon in view is also high contrast. Sure, one day a camera will possibly exceed the capability of the human eye but for now, outside of the defence industry's classified spy satellites, humans still have the edge.
And it is this limitation that drives filter manufacturers to market their GND (graduated neutral density) filters; those rectangular sheets of shaded plastic you often see attached to the front of lenses. While it is undeniable that there are occasions when they are useful, there are so many more instances when they are simply not required.
Graduated filters became popular in the 1970s when they helped solve a problem inherent in the use of colour negative & positive (slide) film: the inability to selectively manipulate the exposure of areas such as the sky once the image had been captured. Since B&W development & post processing were easily within the grasp of the home printing market, filters weren't deemed quite as necessary there; one could dodge & burn as the B&W negative was projected onto the photographic emulsion, correcting areas in the image that needed more or less exposure & development. In colour development & printing this is not possible, because colour papers have to be handled in complete darkness.
When digital capture & post processing software arrived, it meant that for the first time, anyone could manipulate the image after it had been captured, regardless of whether it was in B&W or colour. It still didn't solve the problem of lost detail when the dynamic range of a sensor was exceeded, but it did allow selective changes to be made anywhere in the image: to exposure, hue, saturation & luminosity, for example.
So what is dynamic range? It is simply the range between the lightest & darkest regions in a scene. Scientifically, brightness (luminance) is measured in candela per square metre (cd/m²), so logically, the brighter a scene, the higher its cd/m² value; the dynamic range is the ratio between the highest & lowest values present. Within almost any scene there is a spread of cd/m² values, & it is this difference that we see as highlights & shadows, which helps create form, volume, shape, texture & distance. A high contrast scene has a large difference in cd/m² between its brightest & darkest regions, while a low contrast scene has a relatively small one.
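As a quick illustrative sketch of the relationship between luminance ratio & EV (the luminance figures below are my own rough examples, not measurements from any particular scene or sensor): each stop of dynamic range is a doubling of luminance, so the range in EV is the base-2 logarithm of the ratio.

```python
import math

def dynamic_range_ev(brightest_cd_m2: float, darkest_cd_m2: float) -> float:
    """Dynamic range in EV (stops): the base-2 log of the luminance ratio."""
    return math.log2(brightest_cd_m2 / darkest_cd_m2)

# A bright sky region at roughly 8000 cd/m² against a deep shadow at
# roughly 2 cd/m² spans about 12 stops -- near the limit of many sensors.
print(round(dynamic_range_ev(8000, 2), 1))  # ≈ 12.0

# A flat, overcast scene: 500 cd/m² against 60 cd/m² is only ~3 stops.
print(round(dynamic_range_ev(500, 60), 1))  # ≈ 3.1
```

The point of the logarithm is that dynamic range is about ratios, not absolute brightness: a dark moonlit landscape with a 4000:1 luminance ratio is every bit as "high contrast" as a sunlit beach with the same ratio.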
The average of this difference is effectively the level at which a camera determines the correct exposure, the amount of light to be received by the digital sensor once three parameters are set: ISO, shutter duration & lens aperture. But this average is not necessarily the intuitive midpoint, because the meter is calibrated to render it as a middle grey, & the world we see is lit almost entirely by light reflecting off surfaces. Different surfaces reflect different amounts even when lit with the same amount of light. Thus, grass reflects poorly compared to, say, water or snow.
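To sketch why that matters (the reflectance figures & function name here are illustrative assumptions, not values from any real metering standard): a reflected-light meter cannot know how reflective a surface is, so it assumes a middle value, commonly quoted as around 18%. Surfaces far from that assumption get misjudged by a predictable number of stops.

```python
import math

# Assumption for this sketch: the meter treats every surface as ~18% reflective.
ASSUMED_REFLECTANCE = 0.18

def metering_error_ev(actual_reflectance: float) -> float:
    """Stops by which a meter misjudges a uniform surface of the given
    reflectance. Positive means the shot comes out darker than the scene
    (underexposed); negative means lighter (overexposed)."""
    return math.log2(actual_reflectance / ASSUMED_REFLECTANCE)

# Snow at ~90% reflectance: the meter sees lots of light & underexposes
# by roughly 2.3 stops, turning white snow grey.
print(round(metering_error_ev(0.90), 1))

# Dark grass at ~9% reflectance: the meter overexposes by about 1 stop.
print(round(metering_error_ev(0.09), 1))
```

This is why experienced photographers dial in positive exposure compensation for snow scenes & negative compensation for very dark subjects: they are correcting for the meter's fixed assumption.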