Are the limitations of our vision, such as the field of view and our singular point of focus, entirely based on the limitations of the eye?
It seems like it's possible to feed an artificial signal into the brain through the optic nerve. What would happen if you fed a 360° video through such an interface?
What kinds of differences in experience would this provide, for example regarding the ability to focus on a particular object? It seems that moving the eyes would no longer be meaningful (there would be no need for optical focusing), which introduces another curious situation on its own. Would the brain be able to adapt to operating with multiple (mental-visual) focus points in such a setting?
My understanding is that the eyes are actually quite poor at providing the brain with a clean image. Perhaps the only areas where eyes are better than cameras are in viewing light and dark areas simultaneously (high contrast), focusing quickly, and seeing reasonably well in low light. Each eye has a full-on blind spot, and the ability to see colours in the periphery is quite limited. Almost everything about our clean visual perception is a result of the brain's compensatory mechanisms -- such as filling in holes and choosing colours based on memory.
Although this is an unscientific perspective: when I dream, I can see clearly not only in the centre but also in my periphery. Moreover, my vision in dreams does not have the static-like overlay that it does in real life. (I am often consciously aware that I am dreaming, and have run various tests while in that state.) Hence, my guess is that the brain is capable of handling a fair bit more data than the eyes provide.
When you consider all the people born with abnormalities, and the array of changes that your body goes through from development all the way to old age, the brain should be capable of handling all sorts of unnatural situations. Even looking at a computer screen and understanding the idea of windows being behind other windows, or of a webpage being like a long scroll of which you can see only a part, is rather unnatural. Many of those brought up before the time of computers have a great deal of difficulty understanding the virtual realm that resides inside the screen (of computers, smartphones, and other modern gadgets).
The concentration of receptors is vastly greater near the centre of visual fixation. While I doubt the brain could handle seeing in full definition in the periphery, it could probably handle quite a bit more than it currently does. I do believe the brain could handle mentally focusing on multiple regions simultaneously: I can focus mentally on three or four areas of my periphery at once, despite the image being quite unclear. At the same time, the brain has limited capacity -- for example, it is said that the human brain can track up to 60 moving objects within roughly 30° of visual field. If we could see all the way around us at once, I imagine it would be quite demanding of our attention, but I am sure we could grow accustomed to it.
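The falloff of resolution away from the centre of fixation is often summarised with a cortical magnification function, M(E) = M0 / (1 + E / E2), where E is eccentricity in degrees. A minimal sketch; the parameter values are illustrative figures in the range reported for human V1, not definitive measurements:

```python
# Cortical magnification: mm of V1 cortex devoted to each degree of visual
# field, as a function of eccentricity (degrees from the centre of gaze).
# m0 and e2 are illustrative values; estimates vary across studies.
def cortical_magnification(ecc_deg, m0=17.3, e2=0.75):
    return m0 / (1 + ecc_deg / e2)

for e in [0, 2, 10, 40]:
    print(f"{e:>3} deg -> {cortical_magnification(e):5.2f} mm of cortex per deg")
```

The steep decline (here roughly a 14-fold drop by 10° out) illustrates how much more coarsely the periphery is represented than the fovea.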
The distinction that you draw between "instrument"/eye and brain is not particularly clear, and to some people it makes more sense to think of the eye as part of the brain.
Regardless, let me try to summarise some visual limitations and their causes. Our visual acuity and contrast sensitivity are limited by the optics of the eye (e.g., the lens), the arrangement of photoreceptors on the retina, and the centre-surround organisation of receptive fields (in the retina, but also in V1 and other parts of the visual brain). Some good sources for this are:
http://webvision.med.utah.edu/book/part-viii-gabac-receptors/visual-acuity/
https://foundationsofvision.stanford.edu/chapter-6-the-cortical-representation/
(the second of these, Wandell's book, has plenty about the organisation of visual cortex).
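The centre-surround organisation mentioned above is commonly modelled as a difference of two Gaussians: a narrow excitatory centre minus a broader inhibitory surround. A minimal sketch with NumPy; the kernel size and sigma values are arbitrary illustrative choices:

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians model of a centre-surround receptive field:
    excitatory centre (narrow Gaussian) minus inhibitory surround (broad one)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    centre = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return centre - surround

k = dog_kernel()
# Positive at the middle (excitatory centre), negative further out (surround).
print(k[10, 10] > 0, k[10, 0] < 0)
```

Because each Gaussian integrates to roughly one, the kernel sums to approximately zero: convolved with an image, it responds to local contrast (edges) rather than to uniform illumination, which is one reason acuity is better described in terms of contrast sensitivity than raw light level.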
As I think you are hinting, we also move our eyes (fixations and saccades), so that we process a sequence of locations at high (foveal) spatial resolution. I suppose you could also look at this as a limitation (though it is one that is shared by almost all species, which is quite interesting in itself).
Answers to the rest of your question are necessarily speculative. On the one hand, the brain is quite plastic and shows (sometimes amazing) re-organisation when it is given different visual inputs; for example, there is evidence that the visual cortex of blind people takes on different functions. On the other hand, I find it very unlikely that our visual cortex would cope with the increase in information that you are suggesting, or that it would allow us to behave any differently. It would mean a dramatic re-organisation of the way it is currently set up (which reflects the receptive fields and the field of view we normally have). Bear in mind that there are far fewer fibres in the optic nerve (retinal ganglion cell axons) than there are photoreceptors in the retina, so the visual system is already filtering and compressing the information before it reaches the cortex. There simply wouldn't be enough capacity to perform the same computations across a far wider area, though I suppose we might manage poorer resolution over a wider area.

As for the other point, about "mentally focusing": in natural conditions both fixation and covert attention tend to be on a single location. This probably has numerous benefits (e.g., mapping coordinates in space so that we can plan actions), and a much wider field or multiple attention pointers would probably be less efficient.
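The filtering and compression in the early visual pathway can be put in rough numbers. The figures below are widely quoted textbook approximations, not precise measurements:

```python
# Back-of-the-envelope comparison of receptor and optic-nerve counts per eye.
# Both counts are common approximations (assumption, not exact data).
photoreceptors_per_eye = 126_000_000  # ~120 million rods + ~6 million cones
optic_nerve_fibres = 1_000_000        # retinal ganglion cell axons

ratio = photoreceptors_per_eye / optic_nerve_fibres
print(f"~{ratio:.0f} photoreceptors converge onto each optic-nerve fibre")
```

On average, then, the retina has already compressed the image on the order of a hundred-fold before any signal reaches the cortex, which is why simply feeding in a richer signal would not straightforwardly yield richer perception.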