Cognitive Psychology: Chapter 1: Introduction to Cognitive Psychology and Distinctions Cognitive Psychologists Make


Table of Contents
  1. Front Matter
  2. Preface
  3. Acknowledgements
  4. Chapter 1: Introduction to Cognitive Psychology and Distinctions Cognitive Psychologists Make
  5. Chapter 2: Sensory Memory
  6. Chapter 3: Pattern Recognition (words, objects, and faces)
  7. Chapter 4: Attention
  8. Chapter 5: Short-term Memory and Working Memory
  9. Chapter 6: Introduction to Episodic Long-Term Memory
  10. Chapter 7: Semantic Memory
  11. Chapter 8: LTM in Natural Settings: Interactions between Semantic and Episodic Long-Term Memory
  12. Chapter 9: Language

Chapter 1: Introduction to Cognitive Psychology and Distinctions Cognitive Psychologists Make

Welcome to Cognitive Psychology. This chapter aims to provide an introductory understanding of cognitive psychology, its scope, and its distinctions from other psychological approaches. Cognitive psychology is a field that focuses on understanding mental processes such as perception, learning, memory, decision-making, and language. It employs scientific methods to study these processes and formulate theories about how the mind works.

The Nature of Cognitive Psychology

What is Cognition?

Cognition refers to the mental processes involved in gaining knowledge and comprehension. These processes include attention, perception, thinking, knowing, remembering, judging, and problem-solving. Cognitive psychology studies these processes scientifically (i.e., using the scientific method) to understand how we perceive, think, and remember information.

Key Areas of Cognitive Psychology

  • Perception: How we interpret sensory information to understand our environment.
  • Attention: How we focus on specific stimuli while ignoring others.
  • Memory: The processes involved in storing, retaining, and recalling information.
  • Language: The cognitive processes involved in understanding and producing language.
  • Decision Making: How we make choices and judgments.

The Scientific Study of Cognition

The scientific method is integral to studying mental processes like attention and memory, even though these processes are internal and not directly observable. Here’s how it applies:

  1. Observation: Scientists observe behavior and gather data through controlled experiments or naturalistic observations. For instance, they may observe how people react to stimuli under different conditions to make inferences about attention, memory, etc.
  2. Hypothesis Formation: Based on observations and existing theories, researchers formulate hypotheses about mental processes. These hypotheses are testable (i.e., falsifiable) predictions about how attention or memory might function.
  3. Experimentation: Researchers design experiments to test these hypotheses. These experiments manipulate variables (like types of stimuli or task difficulty) and measure outcomes (like reaction times or accuracy) to draw conclusions about mental processes. The variables that are manipulated are called the independent variables and the variables that are measured are the dependent variables.
  4. Data Analysis: Data collected from experiments are analyzed using statistical methods to determine if results support or refute the hypotheses.
  5. Theory Building: Consistent findings across studies contribute to the development of theories that explain how attention, memory, and other mental processes work. These theories are refined and revised based on new evidence and challenges.

Despite the inability to directly observe mental phenomena like attention or memory, the scientific method allows psychologists to infer and understand these processes through systematic observation, experimentation, and theory building.

Historical Context

One significant figure in the history of psychology is B.F. Skinner, a founder of behaviorism. Skinner and other behaviorists argued that it is unscientific to study mental processes because they are not directly observable. They believed that psychology should focus on observable behavior and the ways it is learned.

However, cognitive psychology emerged as a response to the limitations of behaviorism. While behaviorism could explain certain learned behaviors, it struggled to account for more complex cognitive functions like language acquisition, problem-solving, and memory. As just one example, certain behaviors are easier to learn when they align with existing cognitive structures: learning which stove knobs control which burners is easier when the knobs correspond spatially to the burner locations than when the knobs have no consistent organization. Behaviorism struggles to explain these differences solely through reinforcement principles, as it does not account for how stimulus-response associations are affected by cognitive organization and mental maps.

Cognitive Psychology vs. Behaviorism

Behaviorism focuses on observable behaviors and the ways they are learned through associations and reinforcement. Cognitive psychology, on the other hand, studies mental processes and how they influence behavior. While behaviorists like Skinner dismissed the study of the mind as unscientific, cognitive psychologists argue that it is possible to infer mental processes from observable behavior, much like scientists in other fields infer the existence of phenomena they cannot directly observe (e.g., continental drift, sub-atomic particles, black holes, and the origins of the universe, as just a few examples).

Assumptions of Cognitive Psychology

Cognitive psychologists make several key assumptions:

  1. Mental Processes Exist: Processes such as attention, memory, and perception are real and can be studied.
  2. Mental Processes Take Time: These processes are not instantaneous and require time to operate.
  3. Scientific Study is Possible: Mental processes can be studied using scientific methods, including experiments and empirical observation.

Methodologies in Cognitive Psychology

Experimental Approach

Cognitive psychology heavily relies on experiments to study mental processes. Researchers manipulate independent variables (IVs) and measure their effects on dependent variables (DVs). Common DVs in cognitive psychology include reaction times, accuracy, and brain imaging data.

Common Dependent Variables

  1. Reaction Time: The time it takes for a participant to respond to a stimulus.
  2. Accuracy: The correctness of a participant's response.
  3. Brain Imaging: Techniques that allow researchers to observe brain activity. This course will focus on the first two DVs, while brain-imaging data are discussed in greater detail in other courses (e.g., cognitive neuroscience).
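Because this course focuses on reaction time and accuracy, a brief sketch may help make these two DVs concrete. The Python snippet below uses made-up data (the condition names "congruent" and "incongruent" and all numbers are purely illustrative, not from a real experiment) to show how each DV might be summarized for each level of an IV:

```python
# Hypothetical trial records: (IV level, reaction time in ms, correct?)
trials = [
    ("congruent", 450, True), ("congruent", 520, True),
    ("incongruent", 610, True), ("incongruent", 700, False),
]

def mean_rt(trials, condition):
    """Mean reaction time (DV #1) for one IV level, correct trials only."""
    rts = [rt for cond, rt, correct in trials if cond == condition and correct]
    return sum(rts) / len(rts)

def accuracy(trials, condition):
    """Proportion of correct responses (DV #2) for one IV level."""
    outcomes = [correct for cond, rt, correct in trials if cond == condition]
    return sum(outcomes) / len(outcomes)
```

Here mean_rt(trials, "congruent") gives 485.0 ms and accuracy(trials, "incongruent") gives 0.5 — the kinds of condition-level summaries that would then be plotted or analyzed statistically.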

Brain Imaging Techniques

  1. Event-Related Potentials (ERPs): ERPs involve placing electrodes on the scalp to measure electrical activity in response to specific events. This technique provides precise timing information about when different cognitive processes occur.
  2. Positron Emission Tomography (PET): PET scans involve injecting a radioactive tracer into the bloodstream and measuring changes in blood flow in the brain. Active brain regions require more blood flow, allowing researchers to infer which areas are involved in specific cognitive tasks.
  3. Functional Magnetic Resonance Imaging (fMRI): fMRI uses magnetic fields and radio waves to measure blood flow in the brain. It provides detailed images of brain activity over time, allowing researchers to see how different brain regions are involved in cognitive tasks.
  4. Transcranial Magnetic Stimulation (TMS): TMS involves using magnetic pulses to stimulate specific brain regions. It can be used to temporarily disrupt normal brain activity and observe the effects on cognition and behavior.

Cognitive Psychology in Practice

The Role of Theories

Cognitive psychologists develop and test theories about how the mind works. For a theory to be scientifically valid, it must be falsifiable: there must be a way to design experiments that could potentially prove the theory wrong. A circular theory is one that cannot be proven wrong because the explanation relies on the very phenomenon it seeks to explain. For instance, explaining aggression by stating that aggressive behavior is caused by a person's aggressive tendencies, without further defining or explaining those tendencies, is circular: if a person acts aggressively, you conclude they must have aggressive tendencies, and the evidence for those tendencies is the aggressive behavior itself. Such a theory is not considered scientifically valid because it cannot be falsified.

The Importance of Testable Theories

A key aspect of cognitive psychology is developing theories that can be rigorously tested through experiments. These theories must be specific enough to make predictions that can be empirically evaluated. For example, a theory about memory processes might predict that certain types of information are recalled more quickly than others. Researchers can design experiments to test these predictions and refine their theories based on the results.

Data and Graphs

This course has a heavy focus on experimental evidence and as such emphasizes data literacy, focusing extensively on interpreting and creating graphs. In graphical representations, the dependent variable (DV), the outcome being measured, is plotted on the y-axis. Independent variables (IVs), the factors that are manipulated in an experiment, are plotted on the x-axis. When multiple IVs are involved, such as in factorial designs, the levels of the second IV can be distinguished using different colors for bars or lines within the graph. In line graphs, each plotted point represents a specific data point. When you take notes, be sure to draw all figures and graphs, and when studying, make use of office hours if you have any difficulty understanding these figures. You should also be able to reproduce from memory the critical aspects of many of the figures shown in class.

Distinctions

Cognitive psychologists make several distinctions that are useful to understand; in the sections that follow we discuss those distinctions.

Brain vs. Mind

One of the primary distinctions in cognitive psychology is between the study of the brain and the study of the mind. The brain refers to the physical structure composed of neurons and neural networks, while the mind encompasses the mental processes and functions that arise from brain activity. For instance, when a person dreams of eating a sandwich, the brain activity can be measured by observing increased blood flow in the occipital lobe (the part of the brain at the back of the head). However, cognitive psychologists focus on understanding the thought processes and mental imagery, which cannot be directly observed by describing blood flow in the brain. Note, however, that the pattern of blood flow in the brain may, in some instances, be correlated with specific thoughts. In fact, it is now becoming possible to make predictions about what someone is thinking about (e.g., using a screwdriver vs. seeing a face) simply by looking at brain-imaging data.

Bottom-Up vs. Top-Down Processing

Bottom-Up Processing:

  • This type of processing starts with an external stimulus and proceeds to an understanding without any preconceived notions or prior knowledge.
  • It relies solely on the sensory input to build a perceptual representation.
  • Example: When first encountering an ambiguous image, such as the objects shown in Figure 1.1, viewers analyze the visual details without any expectations. This is often more difficult than interpreting the same image when additional background information is available.

Top-Down Processing:

  • This involves using prior knowledge, expectations, or context to interpret sensory information.
  • It incorporates information from long-term memory to make sense of new stimuli.
  • Example: Knowing that the image shown in Figure 1.1 is of an object that spins may make it easier to recognize and interpret what is shown.



Figure 1.1. This figure shows an AI generated closeup of an everyday object. You may have been able to identify this image without prior information (i.e., bottom-up processing) but it is easier to make sense of this if you are provided additional background information (i.e., top-down processing). For example, it may be easier to identify this object if you are told that it is often used before bed and in the morning.

"Up-close toothbrush." by Kahan, T.A. is licensed under CC BY-NC-SA 4.0

Structure vs. Process

Structure:

  • Refers to the mental "workbenches" or storage areas where information processing occurs.
  • Examples include sensory memory, short-term memory (working memory), and long-term memory.
  • These structures are depicted in models as rectangular areas where different types of information are handled.

Process:

  • Refers to the mental activities that move or transform information within these structures.
  • Attention, for instance, is a process that shifts information from sensory memory to short-term memory.
  • Processes are often symbolized with ovals in cognitive models, indicating dynamic activities like encoding, storage, and retrieval.

Many different theories of cognition have been developed over the years, but we will focus on Atkinson and Shiffrin’s (1968) model, which is called the modal model because it is one of the most widely used conceptualizations of cognition. In this model attention is the only process, and all other components are structures. The model is certainly imperfect (e.g., it does not make a distinction between different types of long-term memory), but it provides a useful framework that we will build on in this class. The modal model is shown in Figure 1.2.



Figure 1.2. This figure is a visual representation of Atkinson and Shiffrin’s (1968) modal model of cognition.

"Depiction of modal model." by Kahan, T.A. is licensed under CC BY-NC-SA 4.0

Nature vs. Nurture

This distinction explores whether cognitive abilities and processes are primarily determined by genetics (nature) or by environmental influences (nurture).

  • Nature: Suggests that cognitive abilities are largely inherited. For example, children of parents with strong quantitative skills may also exhibit similar skills due to genetic factors.
  • Nurture: Proposes that experiences and education shape cognitive processes. The quality of math instruction and practice can significantly influence one's quantitative reasoning skills.

Steven Pinker, a prominent cognitive psychologist, leans towards nature in his book The Blank Slate, arguing that many cognitive processes are biologically based. However, others, like H. Allen Orr, have argued for a more nuanced understanding of human behavior that incorporates social, cultural, and environmental factors alongside genetic and evolutionary influences.

Nomothetic vs. Idiographic Approaches

Nomothetic Approach:

  • Focuses on identifying general laws and averages that apply to the cognitive processes of most people.
  • This approach is common in developmental psychology, where researchers might study the average age at which children achieve certain milestones.
  • Example: Determining that the average working memory capacity is 7±2 (which we return to later in the course in the section on short-term memory).

Idiographic Approach:

  • Emphasizes individual differences and unique aspects of cognitive processes.
  • Focuses on how specific individuals differ from the average.
  • Example: Studying individuals with exceptional memory capabilities to understand what makes their cognitive processes unique (e.g., Marilu Henner, an actress famous for her role in the TV sitcom Taxi, has an extraordinary ability to recall autobiographical details and events from her life with remarkable accuracy and detail).

Information Processing vs. Connectionism

Information Processing Approach:

  • Developed from information theory and computer analogy.
  • Views the mind as a system that processes information through a series of stages, similar to how a computer processes data.
  • Involves inputs, processing (which may involve categorization and recoding), and outputs.
  • Initially conceived as serially separable stages, though now also considers parallel processing.
  • Atkinson and Shiffrin’s (1968) modal model, which is shown in Figure 1.2, is an example of an information processing model.


Connectionism (Artificial Intelligence):

  • Uses a neural model as an analogy, inspired by the way neurons in the brain are interconnected.
  • Focuses on creating artificial networks (neural networks) that simulate cognitive processes.
  • Relies on computational speed and the ability of networks to learn through adjusting weights based on inputs and outputs.
  • Figure 1.3 depicts a generic connectionist network with an input layer, hidden layer, and output layer.

Figure 1.3. This is a generic depiction of a connectionist network with 3 input nodes and 2 output nodes along with 4 nodes in the hidden layer.

"Connectionist network." by Kahan, T.A. is licensed under CC BY-NC-SA 4.0

Frank Rosenblatt's Perceptron

Frank Rosenblatt's perceptron (1957) is a neural network model with input nodes, an output node, and weighted connections; unlike today's more sophisticated networks, it has no hidden layers. It processes information mathematically, updating weights based on errors, thereby simulating learning. Rosenblatt's perceptron is a foundational model in the field of artificial neural networks and cognitive psychology. It provides a simple yet powerful framework for understanding how artificial intelligence can process and learn from stimuli, analogous to some basic aspects of human cognition.

The Perceptron Model

The perceptron consists of several key components:

Inputs (x1, x2, ... xn): These are the signals or data points fed into the perceptron. In the figure depicting Rosenblatt's perceptron (see Figure 1.4), the input nodes are represented as the three circles at the bottom of the network, and the input values from left to right are “1”, the x-axis value, and the y-axis value for the point the network is trying to categorize.

Weights (w1, w2, ... wn): Each input node has an associated weight, representing the importance or strength of that input. These weights are assigned random values between -1 and +1 at the start of learning and then change as the network learns. They are shown on the lines that connect the three input units at the bottom to the single output unit; in the example depicted in Figure 1.4, the weights are .1, .2, and -.3.

Output: This is the value that the network produces after processing the inputs and weights. In the figure depicting Rosenblatt's perceptron, the output node is represented as the circle at the top of the network.



Figure 1.4. Figure depicting Rosenblatt's perceptron. The bottom layer of circles are the input nodes and the upper circle is the output node.

"Example Perceptron." by Kahan, T.A. is licensed under CC BY-NC-SA 4.0

The output of the perceptron is determined by the following steps:

  1. Weighted Sum Calculation: The perceptron computes the weighted sum of the inputs. This is done using the formula S = ∑(input x weight).
  2. Activation Function Application: The weighted sum is passed through an activation function to produce the final output. Here, if S is greater than or equal to zero the network outputs a 1, and if the value is less than zero it outputs a 0. In Rosenblatt's perceptron the network categorizes items in a binary manner (1 or 0), where each value represents a distinct category. Each of these outputs has some connection to the world: 1 might indicate that the network is labeling the point road, and 0 might indicate that it is labeling the point grass, as might be the case if the network were categorizing an image of a roadway.
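The two steps above can be sketched in a few lines of Python (an illustrative helper, not part of the course materials; the road/grass labels follow the example in the text):

```python
def perceptron_output(inputs, weights):
    """Step 1: weighted sum S = sum(input x weight); Step 2: step activation."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= 0 else 0  # 1 might label the point "road", 0 "grass"
```

With the starting weights from Figure 1.4, perceptron_output([1, -0.1, 0.3], [0.1, 0.2, -0.3]) returns 0, anticipating the worked examples below.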

Learning and Adjusting Weights

After the network categorizes a point, learning occurs. If the network responds correctly, none of the weights change. However, if the network categorizes the point incorrectly, all of the weights change. The weights are adjusted using the learning rule:

Wnew = Wold + η(T-O) input

where T is the target output, O is the output that was given, and η is a constant (quite often a value of .1). Notice that when the network outputs the correct answer, T and O will be the same (i.e., both 1 or both 0) and the difference between these values equals zero. For this reason, when the network responds correctly none of the weights will change and Wnew = Wold. However, when the network outputs a value (O) that differs from the target output (T), the new weight will be different from the old weight.
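This learning rule can likewise be written as a short Python function (a sketch assuming weights and inputs are stored as parallel lists, as in the worked examples that follow):

```python
def update_weights(weights, inputs, target, output, eta=0.1):
    """Rosenblatt's rule, applied per weight: Wnew = Wold + eta * (T - O) * input."""
    return [w + eta * (target - output) * x
            for w, x in zip(weights, inputs)]
```

When target and output agree, the correction term (T - O) is zero, so the list that comes back is identical to the old weights, exactly as described above.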

Example Calculation #1

Consider a simple perceptron that is trying to categorize the point depicted in Figure 1.5 (i.e., the black dot inside the grass). To do this the network uses the x and y coordinates of the point as input values, which will be values between -1 and +1. In the example shown in Figure 1.5 the point being processed is represented as (-.1, +.3).

  • Inputs: input1 = 1, input2 = -0.1, input3 = +0.3
  • Weights: w1=0.1, w2=0.2, w3=-0.3
  • Constant: η = 0.1


Figure 1.5. Rosenblatt's perceptron is shown with random starting weights. Below this is the point the network will process.

"Example perceptron with input data." by Kahan, T.A. is licensed under CC BY-NC-SA 4.0

The weighted sum S is calculated as follows:

S = (1 x .1) + (-.1 x .2) + (.3 x -.3) = .1 + -.02 + -.09 = .1 – .11 = -.01

Applying the activation function:

S < 0, so the output is 0, which in our example indicates that the network labels this point “grass”.

Learning and Adjusting Weights

The target value (T) is the correct answer (grass) and the output value (O) is what the perceptron labeled the point. Here, since the target value (T) is 0 (i.e., grass) and the perceptron’s output (O) is also 0, the weights will not change.

W1new = W1old + η(T-O) input1 = .1 + .1 (0-0)(1) = .1

W2new = W2old + η(T-O) input2 = .2 + .1 (0-0)(-.1) = .2

W3new = W3old + η(T-O) input3 = -.3 + .1 (0-0)(.3) = -.3
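A short Python snippet (illustrative only) reproduces the arithmetic of Example Calculation #1, which can be handy for checking your own work:

```python
inputs  = [1, -0.1, 0.3]    # bias input, then the point's x and y values
weights = [0.1, 0.2, -0.3]  # starting weights from Figure 1.5
eta = 0.1

s = sum(x * w for x, w in zip(inputs, weights))  # .1 - .02 - .09 = -.01
output = 1 if s >= 0 else 0                      # S < 0, so output 0 ("grass")
target = 0                                       # the correct label is "grass"

# T - O = 0, so every weight stays the same
new_weights = [w + eta * (target - output) * x for w, x in zip(weights, inputs)]
```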

Example Calculation #2

Now let’s imagine this same perceptron is trying to categorize a second point (i.e., the white dot inside the grass in Figure 1.6). To do this the network needs to use the x and y coordinates of this new point as input values. In the example shown here the point that is being processed is represented as (+.1, +.1).

Inputs: input1 = 1, input2 = +0.1, input3 = +0.1

Weights: w1=0.1, w2=0.2, w3=-0.3

Constant: η = 0.1

The weighted sum S is calculated as follows:

S = (1 x .1) + (.1 x .2) + (.1 x -.3) = .1 + .02 + -.03 = +.09


Figure 1.6. The dot shown in white is the next point the network will process.

"Example perceptron with new input data." by Kahan, T.A. is licensed under CC BY-NC-SA 4.0

Applying the activation function:

S ≥ 0, so the output is 1, which in our example indicates that the network labels this point “road”.

Learning and Adjusting Weights

The target value (T) is the correct answer (grass) and the output value (O) is what the perceptron labeled the point. Here, since the target value (T) is 0 (i.e., grass) and the perceptron’s output (O) is 1, the weights will all change.

W1new = W1old + η(T-O) input1 = .1 + .1 (0-1)(1) = 0

W2new = W2old + η(T-O) input2 = .2 + .1 (0-1)(.1) = .19

W3new = W3old + η(T-O) input3 = -.3 + .1 (0-1)(.1) = -.31

These new weights are depicted in Figure 1.7. Any new datapoint would be processed using these new values.


Figure 1.7. The perceptron is shown here with the new weights that were calculated after the network categorized the point incorrectly in Example calculation #2.

"Example perceptron with new weights." by Kahan, T.A. is licensed under CC BY-NC-SA 4.0
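As with the first example, the numbers above can be verified with a short Python snippet (illustrative only; the rounding step just trims floating-point noise):

```python
inputs  = [1, 0.1, 0.1]     # bias input, then the new point's x and y values
weights = [0.1, 0.2, -0.3]  # weights before this trial (Figure 1.5)
eta = 0.1

s = sum(x * w for x, w in zip(inputs, weights))  # .1 + .02 - .03 = +.09
output = 1 if s >= 0 else 0                      # S >= 0, so output 1 ("road")
target = 0                                       # the correct label is "grass"

# T - O = -1, so every weight changes
new_weights = [round(w + eta * (target - output) * x, 10)
               for w, x in zip(weights, inputs)]  # [0.0, 0.19, -0.31]
```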

When does learning finish?

The perceptron processes and categorizes all of the input points (i.e., all the available data) in one cycle, known as an epoch. This process repeats until the perceptron completes an epoch without making any errors. Suppose there are only two input points (as in the example shown here). If the perceptron makes an error while processing the second point, the first epoch ended with an error, so the perceptron will continue processing the points and adjusting the weights until it completes an epoch without any errors. Eventually, if the points are linearly separable, the perceptron will learn to classify them correctly (though this may take many epochs).
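The epoch logic can be sketched as a small training loop in Python (a hypothetical implementation consistent with the earlier examples; real code also needs a cap on epochs in case the data are not linearly separable, which is what max_epochs provides here):

```python
def train_perceptron(points, weights, eta=0.1, max_epochs=1000):
    """Train until one full epoch (a pass over all points) has no errors.

    points is a list of (inputs, target) pairs; training converges only
    if the two categories are linearly separable.
    """
    for _ in range(max_epochs):
        errors = 0
        for inputs, target in points:
            s = sum(x * w for x, w in zip(inputs, weights))  # weighted sum
            output = 1 if s >= 0 else 0                      # step activation
            if output != target:                             # a mistake:
                errors += 1                                  # adjust all weights
                weights = [w + eta * (target - output) * x
                           for w, x in zip(weights, inputs)]
        if errors == 0:  # a clean epoch means learning is finished
            break
    return weights
```

Running this on the two points from the worked examples (both labeled "grass") ends after the first error-free epoch, with weights that classify both points correctly.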

Practical Application: Self-Driving Cars

Perceptrons form the basis of more complex neural networks used in modern applications, such as self-driving cars. These cars use advanced neural networks to process visual and sensory inputs, make decisions, and navigate autonomously. The learning process involves adjusting the network's weights to correctly classify road features, such as distinguishing between the road and the surrounding environment.

For example, a self-driving car equipped with sensors and cameras gathers data points representing various elements of its surroundings. These data points are fed into a neural network, which processes and classifies them to determine the best route. The weights in the network are adjusted through training, allowing the car to improve its performance over time.

Limitations and Advances

The original perceptron model, while revolutionary, has limitations. It can only classify linearly separable data, meaning it can only solve problems where a single straight line can separate the different classes. This limitation was highlighted by Marvin Minsky and Seymour Papert in their 1969 book Perceptrons, which temporarily dampened interest in neural networks.

However, advancements in the 1980s, most notably the backpropagation learning algorithm, made it practical to train multi-layer perceptrons, the forerunners of today's deep learning networks. These networks include additional layers of neurons, called hidden layers, and incorporate both excitatory and inhibitory connections. This allows them to learn and classify more complex, non-linearly separable data, significantly expanding their applicability.

Speech Recognition Example

An example of advanced neural networks is their use in speech recognition. By analyzing spectrograms (visual representations of sound frequencies over time), neural networks can classify and understand different speech sounds. For instance, a network can be trained to recognize various pronunciations of a word, such as "head," "hid," "hood," and "had," by processing the frequency and time data.

Through training with diverse voices, these networks become highly accurate in recognizing and classifying speech, enabling technologies like virtual assistants and automated customer service systems.

Connectionism and Artificial Intelligence

Connectionism refers to the approach of using neural networks to model cognitive processes, emphasizing the idea that mental phenomena arise from the interactions of simple units (neurons). This approach is closely related to artificial intelligence (AI), which aims to create systems that can perform tasks requiring human-like intelligence.

The distinction between connectionism and AI lies primarily in their focus and applications. Connectionism is more concerned with understanding and modeling human cognitive processes, while AI focuses on developing practical applications, such as self-driving cars, speech recognition, and other intelligent systems like chatbots. AI is not necessarily concerned with simulating cognition in order to better understand the mechanisms involved in human thought processes.

Conclusion

Cognitive psychology provides a scientific framework for understanding mental processes and their influence on behavior. By developing and testing theories, conducting experiments, and utilizing advanced brain imaging techniques, cognitive psychologists aim to unravel the complexities of the human mind. This field continues to evolve, incorporating new methodologies and technologies to enhance our understanding of cognition. As you continue your studies in cognitive psychology, remember to take detailed notes, ask questions, and engage with the material to deepen your understanding of this fascinating field.

Frank Rosenblatt's perceptron laid the groundwork for modern neural networks and cognitive psychology. Despite its initial limitations, subsequent advancements have led to powerful AI applications that mimic aspects of human cognition. By understanding the principles of perceptrons and neural networks, we gain insights into both artificial and human intelligence, paving the way for future innovations in technology and cognitive science. If you have any difficulty understanding where any of the numbers came from in the two perceptron examples provided in this chapter then you should make an appointment to meet during office hours.

Note: The object shown in Figure 1.1 is a toothbrush.
