Beliefs are a large part of how the mind functions, since they help form desires and drives. How could one carve out the mental faculties and processes responsible for belief formation and revision? Here is a passage from (Goldman, A):
- An initial phase of this undertaking is to sharpen our conceptualization of the types of cognitive units that should be targets of epistemic evaluation. Lay people are pretty vague about the sorts of entities that qualify as intellectual virtues or vices. In my description of epistemic folkways, I have been deliberately indefinite about these entities, calling them variously "faculties," "processes," "mechanisms," and the like. How should systematic epistemology improve on this score?
- A first possibility, enshrined in the practice of historical philosophers, is to take the relevant units to be cognitive faculties. This might be translated into modern parlance as modules, except that this term has assumed a rather narrow, specialized meaning under Jerry Fodor's (1983) influential treatment of modularity. A better translation might be (cognitive) systems, e.g., the visual system, long-term memory, and so forth. Such systems, however, are also suboptimal candidates for units of epistemic analysis. Many beliefs are the outputs of two or more systems working in tandem. For example, a belief consisting in the visual classification of an object ("That is a chair") may involve matching some information in the visual system with a category stored in long-term memory. A preferable unit of analysis, then, might be a process, construed as the sort of entity depicted by familiar flow charts of cognitive activity. This sort of diagram depicts a sequence of operations (or sets of parallel operations), ultimately culminating in a belief-like output. Such a sequence may span several cognitive systems. This is the sort of entity I had in mind in previous publications (especially Goldman 1986) when I spoke of "cognitive processes."
- Even this sort of entity, however, is not a fully satisfactory unit of analysis. Visual classification, for example, may occur under a variety of degraded conditions. The stimulus may be viewed from an unusual orientation; it may be partly occluded, so that only certain of its parts are visible; and so forth. Obviously, these factors can make a big difference to the reliability of the classification process. Yet it is one and the same process that analyzes the stimulus data and comes to a perceptual "conclusion." So the same process can have different degrees of reliability depending on a variety of parameter values. For purposes of epistemic assessment, it would be instructive to identify the parameters and parameter values that are critically relevant to degrees of reliability. The virtues and vices might then be associated not with processes per se, but with processes operating with specified parameter values.
So various mental faculties might be responsible for belief formation, like memory and vision. I would think that emotional processes would also play a role, since beliefs are often emotionally charged. Unconscious or conscious processes could help form beliefs, and that in turn could determine what the person's goals and drives are like.
How does the mind process sensory inputs? Sensory experiences in the mind have the label 'qualia' (Kim, J):
- Sensations have characteristic qualitative features; these are called "phenomenal" or "phenomenological" or "sensory" qualities - "qualia" is now the standard term. Seeing a ripe tomato has a certain distinctive sensory quality that is unmistakably different from the sensory quality involved in seeing a bunch of spinach leaves. We are familiar with the smells of roses and ammonia; we can tell the sound of a drum from that of a gong; the feel of a cool, smooth granite countertop as we run our fingers over it is distinctively different from the feel of sandpaper. Our waking life is a continuous feast of qualia - colors, smells, sounds, and all the rest. When we are temporarily unable to taste or smell properly because of a bad cold, eating a favorite food can be like chewing cardboard, and we are made acutely aware of what is missing from our experience.
How do these sensory qualities determine how we feel overall? Does the physical match up with the mental? (Kim, J):
- On the functionalist account, mental states are realized by the internal physical states of the psychological subject; so for humans, the experience of red, as a mental state, is realized by a specific neural state. This means that you and I cannot differ in respect of the qualia we experience as long as we are in the same neural state; given that both you and I are in the same neural state, something that is in principle ascertainable by observation, either both of us experience red or neither does.
So mental states are realized by physical states, yet not every mental state seems to have a felt, phenomenal side - here is another quote from the same author (Kim, J):
- In any case, it seems plausible that there are conscious mental states with no special phenomenal character. In general, mental occurrences that we call "experiences" appear to be those that possess phenomenal properties. Sensing and perceiving are experiences, but we do not think of believing and thinking as experiences. If this is so, the idea of phenomenal character and the idea of there being something it is like may come apart, though only slightly. For it certainly seems that there is something it is like to believe something, to suspend judgment about something, to wonder about something, or to hope for something. But as we saw, at least many instances of these states do not seem to have any phenomenal character.
How does someone know when they are conscious of something, or in a conscious state? A good way to answer that would be to compare animals to humans, as the comparison might illustrate what is distinctive about human consciousness. Can you attribute intentionality without attributing consciousness? Here (Gennaro, R) asks that question:
- Can significant explanatory power be achieved by making intentional attributions without attributions of consciousness? It seems to me that the answer is clearly yes, as the animals' case in the previous paragraph shows. We would, I suggest, still rightly attribute all unconscious intentional states to such animals. Would or should we withdraw intentional attributions to an animal if we later come to agree that it is not conscious? I don't think so. Such attributions are useful in explaining and predicting animal behavior, but it does not follow that they have merely "as-if" intentionality. In some cases, we may not know if they are conscious. The same, I suggest, would hold for advanced robots. This is not necessarily to embrace some kind of antirealist Dennettian "intentional stance" position (Dennett 1987). For one thing, we might still agree that those systems have genuine internal mental representations.
I would say that animals have perceptions or even higher-order perceptions (HOP) but don't have thoughts or higher-order thoughts (HOT). A perception or thought is higher order when it takes another perception or thought as its object - such as your being aware of your thought or perception of a certain thing. Animals might have thoughts or perceptions, then, but probably not higher-order ones, since they function basically unconsciously if you compare them to humans.
You could say that animals don't really have 'conscious' thoughts, since they don't think about what they are thinking about. They don't really have higher-level thoughts; they just have simple thoughts, or thoughts that don't involve complex representations (or they don't make those representations complex).
For instance, when someone thinks 'I just did this,' they are thinking more consciously about what they did and the thoughts that were involved. That enables further action and introspection that animals don't have.