
Understanding and Developing Visual-Spatial Intelligence

How good are you at puzzles?

Dr. Amy Marschall is an autistic clinical psychologist with ADHD, working with children and adolescents who also identify with these neurotypes among others. She is certified in TF-CBT and telemental health.


Rachel Goldman, PhD, FTOS, is a licensed psychologist, clinical assistant professor, speaker, and wellness expert specializing in eating behaviors, stress management, and health behavior change.




While the field of psychology has struggled for decades to agree on a comprehensive definition of “intelligence,” it is generally recognized that people have varying innate abilities when it comes to acquiring certain skills and knowledge.

Visual-spatial intelligence is one such set of skills that includes the ability to perceive, hold, manipulate, and problem-solve from visual information. When you put together a puzzle, you use visual-spatial skills to identify which pieces have similar colors that go near each other or similar shapes that will fit together.

The concept of visual-spatial intelligence is part of Howard Gardner’s theory of multiple intelligences, which posits that there are multiple ways for someone to be “intelligent” and that different intelligences come with different strengths. Gardner believed that a singular theory of intelligence drastically overlooked many people’s skills.

Richard Kraft, PhD, a professor of cognitive psychology at Otterbein University, says that “Visual-spatial intelligence is our ability to think about the world in three dimensions. We use visual-spatial intelligence to find our way around and to manipulate mental images of objects and the spaces these objects are in. People with strong visual-spatial intelligence have a good sense of direction, and they know how parts fit together into a whole (like assembling furniture from IKEA).”

According to Dr. Kraft, “We can be accomplished at writing and talking (linguistic intelligence) but have a poor sense of direction (visual-spatial intelligence).” (He may have been talking about the author of this article.)

Learn more about the skills involved in visual-spatial intelligence, how to assess your own visual-spatial abilities, and tips for honing your visual-spatial skills.

First, a Note on Intelligence Testing

Much early research on the concept of intelligence was conducted by white supremacists aiming to develop measures that could “prove” white superiority, and modern intelligence tests continue to uphold oppressive standards by exhibiting ongoing racial bias.

Additionally, the concept of “intelligence” has been used to justify involuntary sterilization of thousands of people on the grounds that they had “inferior” genetics and should not be permitted to reproduce.

Although people have varying levels of skill in different areas, and those who struggle in some areas might need support in order to live their best lives, using concepts like intelligence to decide who is “worthy” to reproduce is eugenicist and harmful.

As such, discussions of the concept of “intelligence” should acknowledge the racist and ableist roots of intelligence testing, and future research must consciously work to undo the harm caused by the field. With this in mind, it can still be beneficial on a personal level to understand your individual strengths and knowledge and to develop new skills.

According to Dr. Kraft, people who have strong visual-spatial intelligence “have a good sense of direction. They can solve puzzles more easily than other people, especially something like the Rubik’s Cube. They can walk into a house and imagine what it would look like after knocking out a wall. Understanding architecture and choreography and film directing comes easily to people with strong visual-spatial intelligence.”

On the other hand, those who struggle with visual-spatial abilities “often get lost, even in familiar spaces, even in buildings they’ve visited many times. They generally have a poor sense of direction and have difficulty thinking in three dimensions.”

When you problem-solve with visual information, put together pieces of a puzzle, or visualize something, you are tapping into your visual-spatial intelligence.

The Wechsler intelligence scales, including the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V), and the Wechsler Adult Intelligence Scale, Fourth Edition (WAIS-IV), include Visual-Spatial Index scores that purport to indicate an individual’s visual-spatial intelligence.

Although these tests have the bias issues noted earlier in this article, they can serve as a starting point for assessing one’s ability to manipulate visual information. According to Dr. Kraft, “Standardized assessment usually takes the form of answering questions about drawings of abstract three-dimensional objects. Tests ask what an object or shape will look like if manipulated in some way—often after three-dimensional rotation.”

Dr. Kraft says that it is possible to self-evaluate your visual-spatial skills. You can practice visualizing and manipulating information in your head, or you can see how you perform on visual puzzles and even time yourself as you attempt these problems.

He also recommends finding a quick online test that you can use to assess your visual-spatial intelligence. While online tests cannot definitively determine an individual’s cognitive abilities, they can be a fun starting point to getting to know your own strengths a bit better.

There is disagreement in the field of psychology regarding individuals’ abilities to develop or increase intelligence. Our intellectual abilities are influenced by both genetics and environment . Some types of intelligence are considered dynamic, or changing; for instance, our verbal abilities tend to improve with education. Others are considered static, or fixed. As such, it may be difficult or impossible to change your visual-spatial intelligence even if you can work to build certain skills.

Dr. Kraft stated: “We probably cannot increase our raw visual-spatial intelligence, but we do learn to compensate.” He shared himself as an example: “As it happens, my visual spatial intelligence isn’t strong, and I have difficulty finding my way around. GPS has largely removed that problem.”

Those with weaker visual-spatial intelligence might also compensate because they are stronger in another intelligence. Per Dr. Kraft, someone who struggles with visual-spatial tasks but is good at memorization might be able to remember landmarks or other cues to help them with their sense of direction. Additionally, they can ask for help, such as having a friend go with them to new locations to ensure they do not get lost.

Although we may not be able to significantly change our intelligence, there are activities we can do to maximize our potential. We can also use these activities to mitigate cognitive decline as we age.

Skills that require using your visual-spatial intelligence include:

  • Solving a Rubik’s Cube
  • Completing mazes
  • Putting puzzles together
  • Reading maps

These activities can both demonstrate your visual-spatial intelligence and allow you to flex your visual-spatial muscles and strengthen your skills in this area. These kinds of brain exercises can strengthen your skills and help you with your sense of direction, problem-solving, and mentally manipulating visual information.

Visual-spatial intelligence is only one of many potential strengths an individual can possess. You can use the activities described here to try and strengthen your visual-spatial abilities. Remember that there is more than one set of skills that goes into being “intelligent,” and struggling in one or many areas is not a personal failing.

Gardner H. Intelligence Reframed: Multiple Intelligences for the 21st Century. New York: Basic Books; 1999.

Croizet JC. The Racism of Intelligence: How Mental Testing Practices Have Constituted an Institutionalized Form of Group Domination. Oxford University Press; 2012.

Reddy A. The eugenic origins of IQ testing: implications for post-Atkins litigation. DePaul Law Review. 2008;57(3):667. https://via.library.depaul.edu/law-review/vol57/iss3/5

Buschkuehl M, Jaeggi SM. Improving intelligence: a literature review. Swiss Medical Weekly. 2010;140(19–20):266–272.

By Amy Marschall, PsyD


Open access | Published: 27 January 2023

Foundations of human spatial problem solving

  • Noah Zarr 1 &
  • Joshua W. Brown 1  

Scientific Reports, volume 13, Article number: 1485 (2023)


  • Cognitive neuroscience
  • Computational neuroscience

Despite great strides in both machine learning and neuroscience, we do not know how the human brain solves problems in the general sense. We approach this question by drawing on the framework of engineering control theory. We demonstrate a computational neural model with only localist learning laws that is able to find solutions to arbitrary problems. The model and humans perform a multi-step task with arbitrary and changing starting and desired ending states. Using a combination of computational neural modeling, human fMRI, and representational similarity analysis, we show here that the roles of a number of brain regions can be reinterpreted as interacting mechanisms of a control theoretic system. The results suggest a new set of functional perspectives on the orbitofrontal cortex, hippocampus, basal ganglia, anterior temporal lobe, lateral prefrontal cortex, and visual cortex, as well as a new path toward artificial general intelligence.


Introduction

Great strides have been made recently toward solving hard problems with deep learning, including reinforcement learning1,2. While these are groundbreaking and show superior performance over humans in some domains, humans nevertheless exceed computers in the ability to find creative and efficient solutions to novel problems, especially with changing internal motivation values3. Artificial general intelligence (AGI), especially the ability to learn autonomously to solve arbitrary problems, remains elusive4.

Value-based decision-making and goal-directed behavior involve a number of interacting brain regions, but how these regions might work together computationally to generate goal-directed actions remains unclear. This may be due in part to a lack of mechanistic theoretical frameworks5,6. The orbitofrontal cortex (OFC) may represent both a cognitive map7 and a flexible goal value representation8, driving actions based on expected outcomes9, though how these guide action selection is still unclear. The hippocampus is important for model-based planning10 and prospection11, and the striatum is important for action selection12. Working memory for visual cues and task sets seems to depend on the visual cortex and lateral prefrontal regions, respectively13,14.

Neuroscience continues to reveal aspects of how the brain might learn to solve problems. Studies of cognitive control highlight how the brain, especially the prefrontal cortex, can apply and update rules to guide behavior15,16, inhibit behavior17, and monitor performance18 to detect and correct errors19. Still, there is a crucial difference between rules and goals. Rules define a mapping from a stimulus to a response20, but goals define a desired state of the individual and the world21. When cognitive control is re-conceptualized as driving the individual to achieve a desired state, or set point, cognitive control becomes a problem amenable to control theory.

Control theory has been applied successfully to account for the neural control of movement22 and has informed various aspects of neuroscience research, including work in C. elegans23, work on controlling states of the brain24 and on electrical stimulation placement methods25 (as distinct from behavioral control over states of the world in the present work), and, more loosely, neural representations underlying how animals control an effector via a brain-computer interface26. In psychology, Perceptual Control Theory has long maintained that behavior is best understood as a means of controlling perceptual input in the sense of control theory27,28.

In the control theory framework, a preferred decision prospect defines a set point, to be achieved by control-theoretic negative feedback controllers29,30. Problem solving then requires (1) defining the goal state; (2) planning a sequence of state transitions to move the current state toward the goal; and (3) generating actions aimed at implementing the desired sequence of state transitions.
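The negative feedback loop at the heart of this framework can be sketched in a few lines. This is an illustrative toy, not part of the GOLSA model; the scalar state and the `gain` parameter are assumptions for demonstration.

```python
# Toy negative feedback controller: each step generates a corrective
# action proportional to the discrepancy between desired and actual state.

def feedback_step(state: float, set_point: float, gain: float = 0.5) -> float:
    """Move the state toward the set point by a fraction of the error."""
    error = set_point - state       # discrepancy (desired minus actual)
    return state + gain * error     # corrective action shrinks the error

state = 0.0
for _ in range(20):
    state = feedback_step(state, set_point=10.0)
# After repeated corrections the state converges on the set point.
```

The same loop generalizes to high-dimensional states: the controller need not know a full trajectory in advance, only the direction that reduces the current discrepancy.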

Algorithms already exist that can implement such strategies, including the Dijkstra and A* algorithms31,32, and they are commonly used in the GPS navigation devices found in cars and cell phones. Many variants of reinforcement learning solve a specific case of this problem, in which the rewarded states are relatively fixed, such as winning a game of Go33. While deep Q networks1 and generative adversarial networks with Monte Carlo tree search33 are very powerful, what happens when the goals change, or the environmental rules change? In that case, the models may require extensive retraining. The more general problem requires the ability to dynamically recalculate the values associated with each state as circumstances, goals, and set points change, even in novel situations.
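To make the contrast concrete, a minimal Dijkstra-style computation of state-to-goal distances shows how state values can simply be recomputed whenever the goal changes, with no retraining. This is a toy sketch; the four-state graph and uniform transition costs are invented for illustration.

```python
import heapq

def dijkstra_distances(graph: dict, goal: str) -> dict:
    """Distance from every reachable state to `goal` (unit edge costs)."""
    dist = {goal: 0}
    pq = [(0, goal)]                      # min-heap of (distance, state)
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nbr in graph[node]:
            nd = d + 1
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist

# Toy 4-state map: A - B - C - D
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
to_d = dijkstra_distances(graph, "D")   # values when the goal is D
to_a = dijkstra_distances(graph, "A")   # goal changed: recompute, no retraining
```

A standard Q-learning agent trained to reach D would need many new experience samples after the goal moved to A; here the value map is rebuilt directly from the known transition structure.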

Here we explore a computational model that solves this more general problem of how the brain solves problems with changing goals34, and we show how a number of brain regions may implement information processing in ways that correspond to specific model components. While this may seem an audacious goal, our previous work has shown how the GOLSA model can solve problems in the general sense of causing the world to assume a desired state via a sequence of actions, as described above34. The model begins with a core premise: the brain constitutes a control-theoretic system, generating actions to minimize the discrepancy between actual and desired states. We developed the Goal-Oriented Learning and Selection of Action (GOLSA) computational neural model from this core premise to simulate how the brain might autonomously learn to solve problems, while maintaining fidelity to known biological mechanisms and constraints such as localist learning laws and real-time neural dynamics. The constraints of biological plausibility both narrow the scope of viable models and afford a direct comparison with neural activity.

The model treats the brain as a high-dimensional control system. It drives behavior to maintain multiple and varying control-theoretic set points of the agent’s state, including low-level homeostatic set points (e.g., hunger, thirst) and high-level cognitive set points (e.g., a Tower of Hanoi configuration). The model autonomously learns the structure of state transitions, then plans actions to arbitrary goals via a novel hill-climbing algorithm inspired by Dijkstra’s algorithm32. The model provides a domain-general solution to the problem of solving problems and performs well in arbitrary planning tasks (such as the Tower of Hanoi) and decision-making problems involving multiple constraints34 (“Methods”).

The GOLSA model works by representing each possible state of the agent and environment in a network layer, with multiple layers each representing the same sets of states (Fig.  1 A,B). The Goal Gradient layer is activated by an arbitrarily specified desired (Goal) state and spreads activation backward along possible state transitions represented as edges in the network 35 , 36 . This value spreading activation generates current state values akin to learned state values (Q values) in reinforcement learning, except that the state values can be reassigned and recalculated dynamically as goals change. This additional flexibility allows goals to be specified dynamically and arbitrarily, with all state values being updated immediately to reflect new goals, thus overcoming a limitation of current RL approaches. Essentially, the Goal Gradient is the hill to climb to minimize the discrepancy between actual and desired states in the control theoretic sense. In parallel, regarding the present state of the model system, the Adjacent States layer receives input from a node representing the current state of the agent and environment, which in turn activates representations of all states that can be achieved with one state transition. The valid adjacent states then mask the Goal Gradient layer to yield the Desired Next State representation. In this layer, the most active unit represents a state which, if achieved, will move the agent one step closer to the goal state. This desired next state is then mapped onto an action (i.e. a controller signal) that is likely to effect the desired state transition. In sum, the model is given an arbitrarily specified goal state and the actual current state of the actor. It then finds an efficient sequence of states to transit in order to reach the goal state, and it generates actions aimed at causing the current state of the world to be updated so that it approaches and reaches the goal state.

figure 1

( A ) The GOLSA model determines the next desired state by hill climbing. Each layer represents the same set of states, one per neuron. The x- and y-axes of the grids represent abstracted coordinates in a space of states. Neurons are connected to each other for states that are reachable from another by one action, in this case neighbors in the x,y plane. The Goal state is activated and spreads activation through a Goal Gradient (Proximity) layer, thus dynamically specifying the value of each state given the goal, so that value is greater for states nearer the goal state. The Current State representation activates all Adjacent States, i.e. that can be achieved with one state transition. These adjacent states mask the Goal Gradient input to the Desired Next State, so that the most active unit in the Desired Next State represents a state attainable with one state transition and which will bring the state most directly toward the goal state. The black arrows indicate that the Desired Next State unit activities are the element-wise products of the corresponding Adjacent States and Goal Gradient unit activities. The font colors match the model layer to corresponding brain regions in Figs. 3 and 4 . ( B ) The desired state transition is determined by the conjunction of current state and desired next state. The GOLSA model learns a mapping from desired state transitions to the actions that cause those transitions. After training, the model can generate novel action sequences to achieve arbitrary goal states. Adapted from 34 .

Here we test whether and how the GOLSA model might provide an account of how various brain regions work together to drive goal-directed behavior. To do this, we ask human subjects to perform a multi-step task to achieve arbitrary goals. We then train the GOLSA model to perform the same task, and we use representational similarity analysis (RSA) to ask whether specific GOLSA model layers show similar representations to specific brain regions ( Supplementary Material ). The results will provide a tentative account of the function of specific brain regions in terms of the GOLSA model, and this account can then be tested and compared against alternative models in future work.

Study design

The details of the model implementation and the model code are available in the “ Methods ”. Behaviorally, we found that the GOLSA model is able to learn to solve arbitrary problems, such as reaching novel states in the Tower of Hanoi task (Fig.  2 A). It does this without hard-wired knowledge, simply by making initially random actions and learning from the outcomes, then synthesizing the learned information to achieve whatever state is specified as the goal state.

figure 2

( A ) The GOLSA model learns to solve problems, achieving arbitrary goal states. It does this by making arbitrary actions and observing which actions cause which state transitions. Figure adapted from earlier work 34 , 37 . ( B ) Treasure Hunt task. Both the GOLSA model and the human fMRI subjects performed a simple treasure hunt task, in which subjects were placed in one of four possible starting locations, then asked to generate actions to reach any of the other possible locations. To test multi-step transitions, subjects had to first move to the location of a key needed to unlock a treasure chest, then move to the treasure chest location. Participants first saw an information screen specifying the contents of each of the four states (‘you’, ‘key’, ‘chest’, or ‘nothing’). After a jittered delay, participants selected a desired movement direction and after another delay saw an image of the outcome location. The mapping of finger buttons to game movements was random on each trial and revealed after subjects were given the task and had to plan their movements, thus avoiding motor confounds during planning. Bottom: The two state-space maps used in the experiment. One map was used in the first half of trials while the other was used in the second half, in counterbalanced order.

Having found that the model can learn autonomously to solve arbitrary problems, we then aimed to identify which brain regions might show representations and activity that matched particular GOLSA model layers. To do this, we tested the GOLSA model with a Treasure Hunt task (Fig.  2 B and “ Methods ”), which was performed by both the GOLSA model and human subjects with fMRI. All human subjects research here was approved by the IRB of Indiana University, and subjects gave full informed consent. The human subjects research was performed in accordance with relevant guidelines/regulations and in accordance with the Declaration of Helsinki. Subjects were placed in one of four starting states and had to traverse one or two states to achieve a goal, by retrieving a key and subsequently using it to unlock a treasure chest for a reward (Fig.  2 B). The Treasure Hunt task presents a challenge to standard RL approaches, because the rewarded (i.e. goal) state changes regularly. In an RL framework, the Bellman equation would regularly relearn the value of each possible state in terms of how close it is to the currently rewarded state, forgetting previous state values in the process.

Representational similarity analysis

To analyze the fMRI and model data, we used model-based fMRI with representational similarity analysis (RSA)38 (“Methods”). RSA considers a set of task conditions and asks whether a model, or brain region, can discriminate between the patterns of activity associated with any two conditions, as measured by a correlation coefficient. By considering every possible pairing of conditions, the RSA method constructs a symmetric representational dissimilarity matrix (RDM), where each entry is 1 − r, and r is the correlation coefficient. This RDM provides a representational fingerprint of what information is present, so that the fingerprints can be compared between a model layer and a given brain region. For our application of RSA, each RDM represented the pairwise correlations across 96 total patterns: 4 starting states × 8 trial types × 3 time points within a trial (problem description, response, and feedback). For each model layer, the pairwise correlations are calculated between the activity pattern across layer cells in one condition and the activity pattern in the same layer in the other condition. For each voxel in the brain, the pairwise correlations are calculated with the activity pattern in a local neighborhood of radius 10 mm (93 voxels total) around the voxel in question, for one condition vs. the other condition. The 10 mm radius was chosen as a tradeoff between a sufficiently high number of voxels for pattern analysis and a sufficiently small area to identify specific regions. The fMRI RSA maps are computed for each subject over all functional scans and then tested across subjects for statistical significance. The comparison between GOLSA model and fMRI RDMs consists of looking for positive correlations between elements of the upper triangular part of a given GOLSA model layer RDM and the RDM around a given voxel in the fMRI data.

The resulting fMRI RSA maps, one per GOLSA model layer, show which brain regions have representational similarities with particular model components. These maps are computed for each subject and then tested across subjects for statistical significance, with whole-brain tests for significance in all cases. Full results are in Table 2, and method details are in the “Methods” section. As a control, we also generated a null model layer that consisted of normally distributed noise (μ = 1, σ = 1). In the null model, no voxels exceeded the cluster-defining threshold, so no significant clusters were found, which suggests that the results below are unlikely to reflect artifacts of the analysis methods.
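The RDM construction described above (each entry 1 − r over pairwise pattern correlations) can be sketched as follows. This is a toy example: the condition count and pattern sizes are arbitrary, not the 96 conditions or 93-voxel neighborhoods of the study.

```python
import numpy as np

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Build a representational dissimilarity matrix.

    patterns: (n_conditions, n_units) activity patterns, one row per
    condition. Returns an (n_conditions, n_conditions) matrix of 1 - r.
    """
    r = np.corrcoef(patterns)   # pairwise Pearson r between condition rows
    return 1.0 - r              # dissimilarity: identical patterns -> 0

def compare_rdms(rdm_a: np.ndarray, rdm_b: np.ndarray) -> float:
    """Correlate the upper-triangular parts of two RDMs, as in
    model-layer vs. voxel-neighborhood comparisons."""
    iu = np.triu_indices_from(rdm_a, k=1)   # off-diagonal upper triangle
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

rng = np.random.default_rng(0)
layer_patterns = rng.normal(size=(6, 50))   # e.g., 6 conditions, 50 units
matrix = rdm(layer_patterns)                # symmetric, zero diagonal
```

A brain region "matches" a model layer when `compare_rdms` is reliably positive across subjects, i.e., the two share a representational fingerprint even though their raw activity patterns live in different spaces.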

Orbitofrontal cortex, goals, and maps

We found that the patterns of activity in a number of distinct brain regions match those expected of a control theoretic system, as instantiated in the GOLSA model (Figs. 3 A,B and 4 A–C); Table 1 ). Orbitofrontal cortex (OFC) activity patterns match model components that represent both a cognitive map 7 and a flexible goal value representation 8 , specifically matching the Goal and Goal Gradient layer activities. These layers represent the current values of the goal state and the current values of states near the goal state, respectively. The Goal Gradient layer incorporates cognitive map information in terms of which states can be reached from which other states. This suggests mechanisms by which OFC regions may calculate the values of states dynamically as part of a value-based decision process, by spreading activation of value from a currently active goal state representation backward. The GOLSA model representations of the desired next state also match overlapping regions in the orbitofrontal cortex (OFC) and ventromedial prefrontal cortex (vmPFC), consistent with a role in finding the more valuable decision option (Fig.  3 ). Reversal learning and satiety effects as supported by the OFC reduce to selecting a new goal state or deactivating a goal state respectively, which immediately updates the values of all states. Collectively this provides a mechanistic account of how value-based decision-making functions in OFC and vmPFC.

figure 3

Representational Similarity Analysis (RSA) of model layers vs. human subjects performing the same Treasure Hunt task. All results shown are significant clusters across the population with a cluster defining threshold of p  < 0.001 cluster corrected to p  < 0.05 overall, and with additional smoothing of 8 mm FWHM applied prior to the population level t-test for visualization purposes. ( A ) population Z maps showing significant regions of similarity to model layers in orbitofrontal cortex. Cf. Figure  1 and Fig.  5 B. The peak regions of similarity for goal-gradient and goal show considerable overlap in right OFC. The region of peak similarity for simulated-state is more posterior. To most clearly show peaks of model-image correspondence, the maps of gradient and goal are here visualized at p  < 0.00001 while all others are visualized at p  < 0.001. ( B ) Z maps showing significant regions of similarity to model layers in right temporal cortex. The peak regions of similarity for goal-gradient and goal overlap and extend into the OFC. The peak regions of similarity for adjacent-states, next-desired-state, and -simulated-state occur in similar but not completely overlapping regions, while the cluster for queue-store is more lateral. ( C ) Fig.  1 A, copied here as a legend, where the font color of each layer name corresponds to the region colors in panels ( A) and ( B) .

figure 4

Representational Similarity Analysis of model layers vs. human subjects performing the same Treasure Hunt task, with the same conditions and RSA analysis as in Fig.  3 . ( A ) Population Z maps showing significant regions of similarity to model layers in visual cortex. The peak regions of similarity for goal-gradient and goal overlap substantially, primarily in bilateral cuneus, inferior occipital gyrus, and lingual gyrus. The simulated-state layer displayed significantly similar activity to that in a smaller medial and posterior region. Statistical thresholding and significance are the same as Fig.  3 . ( B ) Z map showing significant regions of similarity to the desired-transition layer. Similarity peaks were observed for desired-transition in bilateral hippocampal gyrus as well as bilateral caudate and putamen. The desired-transition map displayed here was visualized at p  < 0.00001 for clarity. ( C ) Z maps showing significant regions of similarity to the model layers in frontal cortex. Similarity peaks were observed for queue-store in superior frontal gyrus (BA10). Action-output activity most closely resembled activity in inferior frontal gyrus (BA9), while simulated-state and goal-gradient patterns of activity were more anterior (primarily BA45). Similarity between activity in the latter two layers and activity in OFC, visual cortex, and temporal pole is also visible.

Lateral PFC and planning

The GOLSA model also incorporates a mechanism that allows multi-step planning, by representing a Simulated State as if the desired next state were already achieved, so that the model can plan multiple subsequent state transitions iteratively prior to committing to a particular course of action (Fig.  5 B). Those subsequent state transitions are represented in a Queue Store layer pending execution via competitive queueing, in which the most active action representation is the first to be executed, followed by the next most active representation, and so on 39 , 40 . This constitutes a mechanism of prospection 41 and planning 42 . The Simulated State layer in the GOLSA model shows strong representational similarity with regions of the OFC and anterior temporal lobe, and the Queue Store layer shows strong similarity with the anterior temporal lobe and lateral prefrontal cortex. This constitutes a mechanistic account of how the vmPFC and OFC in particular might contribute to multi-step goal-directed planning, and how plans may be stored in lateral prefrontal cortex.
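The competitive queueing principle above (most active representation executes first, then is suppressed) can be sketched in a few lines. The action names and activation values here are invented for illustration; the model itself implements this with neural dynamics rather than explicit sorting.

```python
def competitive_queue(activations: dict) -> list:
    """Execute plan elements in order of activation strength.

    activations: action -> activation level. The most active
    representation wins each round and is then suppressed.
    """
    pending = dict(activations)
    order = []
    while pending:
        winner = max(pending, key=pending.get)   # strongest representation
        order.append(winner)                     # execute it
        del pending[winner]                      # suppress after execution
    return order

sequence = competitive_queue({"left": 0.9, "up": 0.6, "right": 0.3})
# Executes "left" first, then "up", then "right".
```

The appeal of this scheme is that a whole multi-step plan can be held simultaneously as a graded activation pattern, rather than as an explicit ordered list, and still be read out serially.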

figure 5

( A ) Full diagram of the core model. Each rectangle represents a layer and each arrow a projection. The body is a node; two additional nodes (not shown) provide inhibition at each state change and oscillatory control. The colored squares indicate which layers receive inhibition from these nodes. Some recurrent connections are not shown. ( B ) Full diagram of the extended model, with an added top row representing the ability to plan multiple state-transition steps ahead (Simulated State, Queue Input, Queue Store, and Queue Output layers). Adapted with permission from earlier work 34 .

Visual cortex and future visual states

The visual cortex also shows representational patterns consistent with representing the goal, goal gradient, and simulated future states (Figs.  3 B and 4 ). This suggests a role for the visual cortex in planning, in the sense of representing anticipated future states beyond simply representing current visual input. Future states in the present task are represented largely by images of locations, such as an image of a scarecrow or a house. In that sense, an anticipated future state could be decoded as matching the representation of the image of that future state. One possibility is that this reflects an attentional effect that facilitates processing of visual cues representing anticipated future states. Another possibility is that visual cortex activity signals a kind of working memory for anticipated future visual states, similar to how working memory for past visual states has been decoded from visual cortex activity 14 . This would be distinct from predictive coding, in that the activity predicts future states, not current states 43 . In either case, the results are consistent with the notion that the visual cortex may not be only a sensory region but may play some role in planning by representing the details of anticipated future states.

Anterior temporal lobe and planning

The anterior temporal lobe likewise shows representations of the goal, goal gradient, the adjacent states, the next desired state, and simulated future and queue store states (Figs.  3 B, 4 C). In one sense this is not surprising, as the states of the task are represented by images of objects, and visual objects (especially faces) are represented in the anterior temporal lobe 44 . Still, the fact that the anterior temporal lobe shows representations consistent with planning mechanisms suggests a more active role in planning beyond feedforward sensory processing as commonly understood 45 .

Hippocampal region and prospection

Once the desired next state is specified, it must be translated to an action. The hippocampus and striatum match the representations of the Desired Transition layer in the GOLSA model. This model layer represents a conjunction of the current state and desired next state transitions, which in the GOLSA model is a necessary step toward selecting an appropriate action to achieve the desired transition. This is consistent with the role of the hippocampus in prospection 41 , and it suggests computational and neural mechanisms by which the hippocampus may play a key role in turning goals into predictions about the future, for the purpose of planning actions 10 , 11 . Finally, as would be expected, the motor output representations in the GOLSA model match motor output patterns in the motor cortex (Fig.  4 C).
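The mapping from a (current state, desired next state) conjunction to an action can be illustrated with a simple lookup. The state and action labels below use the Treasure Hunt farm layout for concreteness, but the code is a hypothetical sketch, not the model's mechanism:

```python
# Hypothetical sketch: units coding a conjunction of (current state, desired
# next state) gate the action that achieves that transition.

action_for_transition = {
    ("field", "house"): "right",
    ("field", "stump"): "down",
    ("house", "field"): "left",
    ("house", "pasture"): "down",
}

def select_action(current, desired):
    """Translate a desired state transition into a motor command."""
    return action_for_transition[(current, desired)]

print(select_action("field", "house"))  # right
```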

The results above show how a computational neural model, the GOLSA model, provides a novel account of the functional contributions of a number of brain regions. The guiding theory is that a substantial set of brain regions function together as a control-theoretic mechanism 47 , generating behaviors to minimize the discrepancy between the current state and the desired (goal) state. The OFC is understood as including neurons that represent the value of various states in the world, such as the value of acquiring certain objects. Greater activity of an OFC neuron corresponds to greater value of its represented state given the current goals. Because of spreading activation, neurons will be more active if they represent states closer to the goal. This results in value representations similar to those provided by the Bellman equation of reinforcement learning 48 , with the difference that spreading activation can instantly reconfigure the values of states as goals change, without requiring extensive iterations of the Bellman equation.
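The spreading-activation value gradient can be sketched as follows. This is a minimal illustration under assumed parameters (a per-hop decay of 0.8), not the model's actual dynamics: clamping the goal at 1.0 and letting activation spread over the state graph yields Bellman-like values that reconfigure as soon as the goal changes.

```python
# Minimal sketch of a goal gradient via spreading activation (assumed
# parameters; not the GOLSA implementation). The goal is clamped to 1.0 and
# activation spreads with a per-hop decay, so states nearer the goal become
# more active.

def goal_gradient(adjacency, goal, decay=0.8, iters=20):
    values = {s: 0.0 for s in adjacency}
    values[goal] = 1.0
    for _ in range(iters):
        new = {s: max(values[s], max(values[n] * decay for n in adjacency[s]))
               for s in adjacency}
        new[goal] = 1.0  # the goal state stays clamped
        values = new
    return values

# Four states on a square, no diagonal moves (as in the Treasure Hunt task).
adj = {"field": ["house", "stump"],
       "house": ["field", "pasture"],
       "stump": ["field", "pasture"],
       "pasture": ["house", "stump"]}
print(goal_gradient(adj, "house"))
```

Calling `goal_gradient(adj, "stump")` instead immediately yields a different gradient with no retraining step, which is the contrast with iterated Bellman updates drawn above.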

Given the current state and the goal state, the next desired state can be determined as a nearby state that can be reached and that also moves the current state of the world closer to the goal state. Table 1 shows these effects in the medial frontal gyrus, putamen, superior temporal gyrus, pons, and precuneus. The GOLSA model suggests this is computed as the activation of available state representations, multiplied by the OFC value for that state. Precedent for this kind of multiplicative effect has been shown in the attention literature 49 . The action to be generated is represented by neural activity in the motor cortex region. This in turn is determined on the basis of neurons that are active specifically for a conjunction of the particular current state and next desired state. Neurally, we find this conjunction represented across a large region including the striatum and hippocampus. This is consistent with the notion of the hippocampus as a generative recurrent neural network, that starts at a current state and runs forward, specifically toward the desired state 50 . The striatum is understood as part of an action gate that permits certain actions in specific contexts, although the GOLSA model does not include an explicit action gate 51 . Where multiple action steps must be planned prior to executing any of them, the lateral PFC seems to represent a queue of action plans in sequence, as sustained activity representing working memory 39 , 52 . By contrast, working memory representations in the visual cortex apparently represent the instructed future states as per the instructions for each task trial, and these are properly understood as visual sensory rather than motor working memories 14 .
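The multiplicative computation of the next desired state suggested above can be sketched as an assumed toy, with illustrative activation values:

```python
# Hypothetical sketch: the next desired state is the state whose adjacent-
# state activation (reachability) times its value (goal gradient) is largest.

def next_desired_state(adjacent_activation, state_values):
    """adjacent_activation: state -> activation (0 if not reachable now).
    state_values: state -> goal-gradient value."""
    drive = {s: adjacent_activation.get(s, 0.0) * v
             for s, v in state_values.items()}
    return max(drive, key=drive.get)

# From "field", both "house" and "stump" are reachable; "house" carries the
# higher goal-gradient value, so it wins.
adjacent = {"house": 1.0, "stump": 1.0}
values = {"field": 0.8, "house": 1.0, "stump": 0.64, "pasture": 0.8}
print(next_desired_state(adjacent, values))  # house
```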

Our findings overall bear a resemblance to the Free Energy principle. According to this principle, organisms learn to generate predictions of the most likely (i.e. rewarding) future states under a policy, then via active inference emit actions to cause the most probable outcomes to become reality, thus minimizing surprise 53 , 54 . Like active inference, the GOLSA model emits actions to minimize the discrepancy between the actual and predicted state. Of note, the GOLSA model specifies the future state as a desired state rather than a most likely state. This crucial distinction allows a state that has a high current value to be pursued, even if the probability of being in that state is very low (for example, buying lottery tickets and winning). Furthermore, the model includes the mechanisms of Fig.  1 , which allow for flexible planning given arbitrary goals. The GOLSA model is a process model and simulates rate-coded neural activity as a dynamical system (“ Methods ”), which affords a more direct comparison with neural activity representations over time, as in Figs.  3 and 4 .

The GOLSA model, and especially our analysis of it, builds on recent work that developed methods to test computational neural models against empirical data. Substantial previous work has demonstrated how computational neural modeling can provide insight into the functional properties underlying empirical neural data, such as recurrent neural networks elucidating the representational structure in anterior cingulate 19 , 55 , 56 and PFC 57 ; deep neural networks accounting for object recognition in IT with representational similarity analysis 58 and for encoding/decoding of visual cortex representations 59 ; dimensionality reduction for comparing neural recordings and computational neural models 60 ; and representations of multiple learned tasks in computational neural models 61 .

The GOLSA model shares some similarity with model-based reinforcement learning (MBRL), in that both include learned models of next-state probabilities as a function of current state and action pairs. Still, a significant limitation of both model-based and model-free RL is that typically there is only a single ultimate goal, e.g. gaining a reward or winning a game. Q-values 62 are thus learned in order to maximize a single reward value. This implies several limitations: (1) that Q values are strongly paired with corresponding states; (2) that there is only one Q value per state at a given time, as in a Markov decision process (MDP); and (3) that Q values are generally updated via substantial relearning. In contrast, real organisms will find differing reward values associated with different goals at different times and circumstances. This implies that goals will change over time, and re-learning Q-values with each goal change would be inefficient. Instead, a more flexible mechanism will dynamically assign values to various goals and then plan accordingly. The GOLSA model exemplifies this approach, essentially replacing the learned Q values of MBRL and MDPs with an activation-based representation of state value, which can be dynamically reconfigured as goals change. This overcomes the three limitations above.
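The contrast with relearned Q-values can be made concrete with a toy: a value map computed on demand from the state graph for whichever goal is currently active. The breadth-first scheme below is our illustrative stand-in for an activation-based valuation, not the model's actual mechanism.

```python
from collections import deque

# Toy stand-in for activation-based state values: recompute values for the
# current goal directly from the transition structure, rather than relearning
# a Q-table whenever the goal changes.

def values_for_goal(adjacency, goal, decay=0.8):
    dist = {goal: 0}
    frontier = deque([goal])
    while frontier:                      # breadth-first distances from goal
        s = frontier.popleft()
        for n in adjacency[s]:
            if n not in dist:
                dist[n] = dist[s] + 1
                frontier.append(n)
    return {s: decay ** d for s, d in dist.items()}

adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(values_for_goal(adj, "C"))  # values peak at C
print(values_for_goal(adj, "A"))  # goal switched: values reconfigure at once
```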

Our work has several limitations. First, regarding the GOLSA model itself, the main limitation is its present implementation of one-hot state representations. This makes a scale-up to larger and continuous state spaces challenging. Future work may overcome this limitation by replacing the one-hot representations with vector-valued state representations and the learned connections with deep network function approximators. This would require corresponding changes in the search mechanisms of Fig.  1 A, from parallel, spreading activation to a serial, Monte Carlo tree search mechanism. This would be consistent with evidence of serial search during planning 63 , 64 and would afford a new approach to artificial general intelligence that is both powerful and similar to human brain function. Another limitation is that the Treasure Hunt task is essentially a spatial problem-solving task. We anticipate that the GOLSA model could be applied to more general, non-spatial problems, but this remains to be demonstrated.

The fMRI analysis here has several limitations as well. First, a correspondence of representations does not imply a correspondence of computations, nor does it prove the model correct in an absolute sense 65 . There are other computational models that use diffusion gradients to solve goal-directed planning 66 , and more recent work with deep networks to navigate from arbitrary starting to arbitrary ending states 50 . The combined model and fMRI results here constitute a proposed functional account of the various brain regions, but our results do not prove that the regions compute exactly what the corresponding model regions do, nor can we definitively rule out competing models. Nevertheless, the ability of the model to account for fMRI data selectively in specific brain regions suggests that it merits further investigation and direct tests against competing models, as a direction for future research. Future work might compare other models besides GOLSA against the fMRI data using RSA, to ascertain whether other model components might provide a better fit to, and account of, specific brain regions. While variations of model-based and model-free reinforcement learning models would seem likely candidates, we know of only one model, by Banino et al. 50 , endowed with the ability to flexibly switch goals and thus perform the Treasure Hunt task as does the GOLSA model 34 . It would be instructive to compare the overall abilities of GOLSA and the model of Banino et al. to account for the RDMs of specific brain regions in the Treasure Hunt task, although it is unclear how to do a direct comparison given that the two models consist of very different mechanisms.

The GOLSA model may in principle be extended hierarchically. The frontal cortex has a hierarchical representational structure, in which higher levels of a task may be represented more anteriorly 67 . Such hierarchical structure has been construed to represent higher, more abstract task rules 13 , 15 , 68 . The GOLSA model suggests another perspective: that higher level representations consist of higher level goals rather than higher level rules. In the coffee-making task, for example 69 , the higher level task of making coffee may require a lower level task of boiling water. If the GOLSA model framework were extended hierarchically, the high level goal of having coffee prepared would activate a lower level goal of having the water heated to a specified temperature. The goal-specification framework here is intrinsically more robust than a rule- or schema-based framework: rules may fail to produce a desired outcome, but if an error occurs during GOLSA task performance, replanning simply calculates the optimal sequence of events from whatever the current state is, and the error will be automatically addressed.

This incidentally points to a key difference between rules and goals, in that task rules define a mapping from stimuli to responses 15 , in a way that is not necessarily teleological. Goals, in contrast, are by definition teleological. This distinction roughly parallels that between model-free and model-based reinforcement learning 70 . The rule concept, as a stimulus–response mapping, implies that an error is a failure to generate the action specified by the stimulus, regardless of the final state of a system. In contrast, the goal concept implies that an error is precisely a failure to generate the desired final state of a system. Well-learned actions may acquire a degree of automaticity over time 71 , but arguably the degree of automaticity is independent of whether an action is rule-oriented vs. goal-directed. If a goal-directed action becomes automatized, this does not negate its teleological nature, namely that errors in the desired final state of the world can be detected and lead to corrective action to achieve the desired final state. Rule-based action, whether deliberate or automatized, does not necessarily entail corrective action to achieve a desired state. Where actions are generated, and possibly corrected, to achieve a desired state of the world, this may properly be referred to as goal-directed behavior.

We have investigated the GOLSA model here to examine whether and how it might account for the function of specific brain regions. With RSA analysis, we found that specific layers of the GOLSA model show strong representational similarities with corresponding brain regions. Goals and goal value gradients matched especially the orbitofrontal cortex, and also some aspects of the visual and anterior temporal cortices. The desired transition layer matched representations in the hippocampus and striatum, and simulated future states matched representations in the middle frontal gyrus and superior temporal pole. Not surprisingly, the model motor layer representations matched the motor cortex. Collectively, these results constitute a proposal that the GOLSA model can provide an organizing account of how multiple brain regions interact to form essentially a negative feedback controller, with time varying behavioral set points derived from motivational states. Future work may investigate this proposal in more depth and compare against alternative models.

Model components

The GOLSA model is constructed from a small set of basic components, and the model code is freely available as supplementary material . The main component class is a layer of units, where each unit represents a neuron (or, more abstractly, a small subpopulation of neurons) corresponding to either a state, a state transition, or an action. The activity of units in a layer represents the neural firing rate and is instantiated as a vector updated according to a first order differential equation (c.f. Grossberg 72 ). The activation function varies between layers, but all units in a particular layer are governed by the same equation. The most typical activation function for a single unit is,

$$\tau \frac{da\left(t\right)}{dt}=-\lambda a\left(t\right)+\left(1-a\left(t\right)\right)E-I+\varepsilon N\left(t\right)\sqrt{dt}$$

(1)

where a represents activation, i.e. the firing rate, of a model neuron. The four terms of this equation represent, in order: passive decay \(-\lambda a(t)\) , shunting excitation \(\left(1-a\left(t\right)\right)E\) , linear inhibition \(-I\) , and random noise \(\varepsilon N(t)\sqrt{dt}\) . “Shunting” refers to the fact that excitation (E) scales inversely as current activity increases, with a natural upper bound of 1. The passive decay works in a similar fashion, providing a natural lower bound of 0. The inhibition term linearly suppresses unit activity, while the final term adds normally distributed noise N (μ = 0, σ = 1), with strength \(\varepsilon\) . Because the differential equations are approximated using the Euler method, the noise term is multiplied by \(\sqrt{dt}\) to standardize the magnitude across different choices of dt 73 , 74 . The speed of activity change is determined by a time constant τ. The parameters τ, λ, ε vary by layer in order to implement different processes. E and I are the total excitation and inhibition, respectively, impinging on a particular unit for every presynaptic unit j in every projection p onto the target unit,

$$E={\sum }_{p}{\sum }_{j}{a}_{{p}_{j}}{w}_{{p}_{j}}$$

(2)

where \({a}_{{p}_{j}}\) is the activation of a presynaptic model neuron that provides excitation, and \({w}_{{p}_{j}}\) is the synaptic weight that determines how much excitation per unit of presynaptic activity will be provided to the postsynaptic model neuron. The total inhibition I is computed in the same form over the inhibitory projections.

A second activation function used in several places throughout the model is,

$$\tau \frac{da\left(t\right)}{dt}=-\lambda a\left(t\right)+\left(1-a\left(t\right)\right)E-a\left(t\right)I+\varepsilon N\left(t\right)\sqrt{dt}$$

(3)

This function is identical to Eq. ( 1 ) except that the inhibition is also shunting, such that it exhibits a strong effect on highly active units and a smaller effect as unit activity approaches 0. While more typical in other models, shunting inhibition has a number of drawbacks in the current model. Two common uses for inhibition in the GOLSA model are winner-take-all dynamics and regulatory inhibition that resets layer activity. Shunting inhibition impedes both of these processes because it fails to fully suppress the appropriate units, since it becomes less effective as unit activity decreases.
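The activation dynamics can be integrated with the Euler method as described. The sketch below uses illustrative parameter values (tau, lambda, and dt are not the model's) and demonstrates the drawback noted above: linear inhibition can drive a unit fully to zero, while shunting inhibition loses its grip as activity falls.

```python
import math
import random

# Euler-method sketch of the activation dynamics (illustrative parameters,
# not the model's). With shunting_inhibition=False this follows the linear-
# inhibition form; with True, the inhibition term scales with activity.

def step(a, E, I, dt=0.01, tau=0.1, lam=1.0, eps=0.0, shunting_inhibition=False):
    inhibition = a * I if shunting_inhibition else I      # shunting vs. linear
    da = -lam * a + (1.0 - a) * E - inhibition            # deterministic terms
    noise = eps * random.gauss(0.0, 1.0) * math.sqrt(dt)  # scaled by sqrt(dt)
    return a + (dt / tau) * da + noise

a_lin = a_shunt = 0.5
for _ in range(200):  # strong inhibition, no excitation
    a_lin = max(0.0, step(a_lin, E=0.0, I=2.0))
    a_shunt = max(0.0, step(a_shunt, E=0.0, I=2.0, shunting_inhibition=True))
print(a_lin, a_shunt)  # linear inhibition reaches exactly 0; shunting does not
```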

Projections

Layers connect to each other via projections, representing the synapses connecting one neural population to another. The primary component of projections is a weight matrix specifying the strength of connections between each pair of units. Learning is instantiated by updating the weights according to a learning function. These functions vary between the projections responsible for the model learning and are fully described in the section below dealing with each learning type. Some projections also maintain a matrix of traces updated by a projection-specific function of presynaptic or postsynaptic activity. The traces serve as a kind of short-term memory for which pre or postsynaptic units were recently activated, which serve a very similar role to eligibility traces as in Barto et al. 75 , though with a different mathematical form.
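A projection can be sketched as a weight matrix plus a decaying presynaptic trace. The class below is a hypothetical minimal version: the Hebbian-style update is illustrative, since the actual learning functions vary by projection as described.

```python
import numpy as np

# Hypothetical minimal projection: weights plus a short-term trace of
# presynaptic activity (loosely analogous to an eligibility trace).

class Projection:
    def __init__(self, n_pre, n_post, trace_decay=0.9):
        self.w = np.zeros((n_post, n_pre))   # synaptic weight matrix
        self.trace = np.zeros(n_pre)         # memory of recent pre activity
        self.trace_decay = trace_decay

    def propagate(self, pre_activity):
        self.trace = self.trace_decay * self.trace + pre_activity
        return self.w @ pre_activity         # excitation delivered downstream

    def learn(self, post_activity, lr=0.1):
        # illustrative Hebbian-style update pairing the trace with post activity
        self.w += lr * np.outer(post_activity, self.trace)

proj = Projection(n_pre=3, n_post=2)
proj.propagate(np.array([1.0, 0.0, 0.0]))
proj.learn(np.array([0.0, 1.0]))
print(proj.w)  # only the pre-unit-0 -> post-unit-1 weight has grown
```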

Nodes

Nodes are model components that are not represented neurally via an activation function. They represent important control and timing signals to the model and are either set externally or update autonomously according to a function of time. For instance, sinusoidal oscillations are used to gate activity between various layers. While in principle rate-coded model neurons could implement a sinusoidal wave, the function is simply hard coded into the update function of the node for simplicity. In some cases, it is necessary for an entire layer to be strongly inhibited when particular conditions hold true, such as when an oscillatory node is in a particular phase. Layers therefore also have a list of inhibitor nodes that prevent unit activity within the layer when the node value meets certain conditions. In a similar fashion, some projections are gated by nodes such that they allow activity to pass through and/or allow the weights to be updated only when the relevant node activity satisfies a particular condition. Another important node provides strong inhibition to many layers when the agent changes states.

Environment

The agent operates in an environment consisting of discrete states, with a set of allowable state transitions. Allowable state transitions are not necessarily bidirectional, but for the present simulations, they are deterministic (unlike the typical MDP formulation used in RL). In some simulations, the environment also contains different types of reward located in various states, which can be used to drive goal selection. In other simulations, the goal is specified externally via a node value.
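A minimal stand-in for such an environment, using the farm states for concreteness (the class and action names are illustrative):

```python
# Discrete-state environment with deterministic, not necessarily
# bidirectional transitions (illustrative sketch).

class Environment:
    def __init__(self, transitions, start):
        self.transitions = transitions   # state -> {action: next_state}
        self.state = start

    def step(self, action):
        # disallowed moves leave the state unchanged
        self.state = self.transitions[self.state].get(action, self.state)
        return self.state

env = Environment(
    {"field":   {"right": "house", "down": "stump"},
     "house":   {"left": "field",  "down": "pasture"},
     "stump":   {"up": "field",    "right": "pasture"},
     "pasture": {"up": "house",    "left": "stump"}},
    start="field")
print(env.step("right"))  # house
```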

Complete network

Each component and subnetwork of the model is described in detail below or in the main text, but for reference and completeness a full diagram of the core network is shown in Fig.  5 A, and the network augmented for multi-step planning is shown in Fig.  5 B. Some of the basic layer properties are summarized in Table 2 . Layers and nodes are referred to using italics, such that the layer representing the current state is referred to simply as current-state.

Representational structure

In Fig.  5 B, the layers Goal, Goal Gradient, Next State, Adjacent States, Previous States, Simulated State, and Current State all have the same number of nodes and the same representational structure, i.e. one state per node.

The layers Desired Transition, Observed Transition, Transition Output, Queue Input, Queue Output, and Queue Store likewise have the same representational structure, which is the number of possible states squared. This allows a node in these layers to represent a transition from one specific state to another specific state.

The layers Action Input, Action Output, and Previous Action all have the same representational structure, which is one possible action per node.
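This one-per-node coding can be written down directly. The indexing convention below (from-state index times the number of states, plus to-state index) is an assumption for illustration:

```python
import numpy as np

# Sketch of the one-hot coding: state layers have N units, transition layers
# N*N units (one per ordered state pair), action layers one unit per action.

N_STATES = 4

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

def transition_unit(from_state, to_state):
    """Index of the unit coding the ordered pair (from_state -> to_state)."""
    return from_state * N_STATES + to_state

state_vec = one_hot(2, N_STATES)                         # current state = 2
trans_vec = one_hot(transition_unit(2, 3), N_STATES**2)  # transition 2 -> 3
print(len(state_vec), len(trans_vec))  # 4 16
```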

Task description

The Treasure Hunt task (Fig.  2 ) was created and presented in OpenSesame, a Python-based toolbox for psychological task design 76 . In the task, participants control an agent that can move within a small environment comprising four distinct states. The nominal setting is a farm, and the states are a field with a scarecrow, the lawn in front of the farm house, a stump with an axe, and a pasture with cows. Each state is associated with a picture of the scene obtained from the internet. These states were chosen to exemplify categories previously shown to elicit a univariate response in different brain regions, namely faces, houses, tools, and animals 77 , 78 , 79 .

Over the course of the experiment, participants were told the locations of treasure chests and the keys needed to open them. By arriving at a chest with the key, participants earned points which were converted to a monetary bonus at the end of the experiment. The states were arranged in a square, where each state was accessible from the two adjacent states but not the state in the opposite corner (diagonal movement was not allowed).

Each trial began with the presentation of a text screen displaying the relevant information for the next trial, namely the locations of the participant, the key, and the chest (Fig.  2 ). Because the neural patterns elicited during the presentation were the primary target of the decoding analysis, it was important that visual information be as similar as possible across different goal configurations, to avoid potential confounds. To hold luminance as constant as possible across conditions, each line always had the same number of characters. Since, for instance, “Farm House: key” has fewer characters than “Farm House: Nothing”, filler characters were added to the shorter lines, namely Xs and Os. On some trials Xs were the filler characters on the top row and Os were the filler characters on the bottom rows. This manipulation allowed us to attempt to decode the relative position of the Xs and Os to test whether decoding could be achieved due only to character-level differences in the display. We found no evidence that our results reflect low level visual confounds such as the properties of the filler characters.
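The padding scheme can be reproduced in one line. The screen text below is a plausible reconstruction from the example in the paragraph, not the task's literal stimuli:

```python
# Pad each line with filler characters so every information screen has the
# same character count (luminance control); text is illustrative.

def pad_line(text, width, filler):
    return text + filler * (width - len(text))

width = len("Farm House: Nothing")
print(pad_line("Farm House: key", width, "X"))  # Farm House: keyXXXX
```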

Participants were under no time constraint on the information screen and pressed a button when they were ready to continue. A delay screen then appeared consisting of four empty boxes. After a jittered interval (1–6 s, distributed exponentially), arrows appeared in the boxes. The arrows represented movement directions, and the boxes corresponded to four buttons under the participant's left middle finger, left index finger, right index finger, and right middle finger, from left to right. Participants pressed the button corresponding to the box with the arrow pointing in the desired direction to initiate a movement. A fixation cross then appeared for another jittered delay of 0–4 s, followed by a 2 s display of the newly reached location if their choice was correct or an error screen if it was incorrect.

If the participant did not yet have the key required to open the chest, the correct movement was always to the key. Sometimes the key and chest were in the same location in which case the participant would earn points immediately. If they were in different locations, then on the next trial the participant had to move to the chest. This structure facilitated a mix of goal distances (one and two states away) while controlling the route required to navigate to the goal.

If the chosen direction was incorrect, participants saw an error screen displaying text and a map of the environment. Participants advanced from this screen with a button press and then restarted the failed trial. If the failed trial was the second step in a two-step sequence (i.e., if they had already gotten the key and then moved to the wrong state to get to the chest), they had to repeat the previous two trials.

Repeating the failed trial ensured that there were balanced numbers of each class of event for decoding, since an incorrect response indicated that some information was not properly maintained or utilized. For example, if a participant failed the second step of a two-trial sequence, then they may not have properly encoded the final goal when first presented with the information screen on the previous trial, which specified the location of the key and the chest.

Halfway through the experiment, the map was reconfigured such that states were swapped across the diagonal axes of the map. This was necessary because otherwise, each state could be reached by exactly two movement directions and exactly two movement directions could be made from it. For instance, if the farm house was the state in the lower left, the farmhouse could only be reached by moving left or down from adjacent states, and participants starting at the farm house could only move up or to the right. If this were true across the entire experiment, above-chance classification of target state, for instance, could appear in regions that in fact only contain information about the intended movement direction.
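The halfway reconfiguration amounts to sending each state to the opposite corner. With assumed grid coordinates this is a one-line transform:

```python
# Swap each state to the opposite corner of a 2x2 grid (coordinates are
# assumed for illustration). This reverses which movement directions reach
# each state, breaking the state/direction confound described above.

positions = {"field": (0, 1), "house": (1, 1), "stump": (0, 0), "pasture": (1, 0)}

def swap_across_diagonal(positions):
    return {state: (1 - x, 1 - y) for state, (x, y) in positions.items()}

swapped = swap_across_diagonal(positions)
print(swapped["field"])  # (1, 0)
```

Applying the swap twice restores the original map, so the two halves of the experiment are symmetric.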

Each state was the starting state for one quarter of the trials and the target destination for a different quarter of the trials. All trials were one of three types. One category consisted of single-trial (single) sequences in which the chest and key were in the same location. The sequences in which the chest and key were in separate locations required two trials to complete, one to move from the initial starting location to the key and another to move from the key location to the chest location. These two steps formed the other two classes of trials, the first-of-two (first) and second-of-two (second) trials. Recall that on second trials, no information other than the participant’s current location is presented on the starting screen to ensure that the participant maintained the location of the chest in memory across the entire two-trial sequence (if it was presented on the second trial, there would be no need to maintain that information through the first trial). The trials were evenly divided into single, first, and second classes with 64 trials in each class. Therefore, every trial had a starting state and an immediate goal, while one third of trials also had a more distant final goal.

Immediately prior to participating in the fMRI version of the task, participants completed a short 16-trial practice outside the scanner to refresh their memory. Before beginning the first run inside the scanner, participants saw a map of the farm states and indicated when they had memorized it before moving on. Within each run, participants completed as many trials as they could within eight minutes. As described above, exactly halfway through the trials, the state space was rearranged with each state moving to the opposite corner. Therefore, when participants completed the first half of the experiment, the current run was terminated and participants were given time to learn the new state space before scanning resumed. At the end of the experiment, participants filled out a short survey about their strategy.

Participants

In total, 49 participants (28 female) completed the behavioral-only portion of the experiment, including during task piloting (early versions of the behavioral task were slightly different from the version described above). Participants provided written informed consent in accordance with the Institutional Review Board at Indiana University, and were compensated $10/hour for their time plus a performance bonus based on accuracy of up to an additional $10. The behavioral task first served as a pilot during task design and then as a pre-screen for the fMRI portion, in that only participants with at least 90% accuracy were invited to participate. Additional criteria for scanning were that subjects be right-handed, free of metal implants, free of claustrophobia, weigh less than 440 pounds, and not be currently taking psychoactive medication. In total, 25 participants began the fMRI task, but one subject withdrew shortly after beginning, leaving 24 subjects who completed the imaging task (14 female). Across the 24 subjects, the average error rate of responses during the fMRI task was 2.4%; error trials were modeled separately in the fMRI analysis but were not analyzed further, as there were too few for a meaningful analysis.

fMRI acquisition and data preprocessing

Imaging data were collected on a Siemens Magnetom Trio 3.0-Tesla MRI scanner with a 32-channel head coil. Foam padding was inserted around the sides of the head to increase participant comfort and reduce head motion. Functional T2*-weighted images were acquired using a multiband EPI sequence 80 with 42 contiguous slices and 3.44 × 3.44 × 3.4 mm 3 voxels (echo time = 28 ms; flip angle = 60°; field of view = 220 mm; multiband acceleration factor = 3). For the first subject, the TR was 813 ms, but during data collection for the second subject the TR changed to 816 ms for unknown reasons. The scanner was upgraded after collecting data from an additional five subjects, at which point the TR remained constant at 832 ms. All other parameters remained unchanged. High-resolution T1-weighted MPRAGE images were collected for spatial normalization (256 × 256 × 160 matrix of 1 × 1 × 1 mm 3 voxels; TR = 1800 ms; echo time = 2.56 ms; flip angle = 9°).

Functional data were spike-corrected using AFNI’s 3dDespike ( http://afni.nimh.nih.gov/afni ). Functional images were corrected for difference in slice timing using sinc-interpolation and head movement using a least-squares approach with a 6-parameter rigid body spatial transformation. For subjects who moved more than 3 mm total or 0.5 mm between TRs, 24 motion regressors were added to subsequent GLM analyses 81 .

Because MVPA and representational similarity analysis (RSA) rely on precise voxelwise patterns, these analyses were performed before spatial normalization. For the univariate analyses, structural data were coregistered to the functional data and segmented into gray and white matter probability maps 82 . These segmented images were used to calculate spatial normalization parameters to the MNI template, which were subsequently applied to the functional data. As part of spatial normalization, the data were resampled to 2 × 2 × 2 mm 3 ; this upsampling allowed maximum preservation of information. All analyses included a temporal high-pass filter (128 s) and correction for temporal autocorrelation using an autoregressive AR(1) model.

Univariate GLM

For initial univariate analyses, we measured the neural response associated with each outcome state at the outcome screen (when an image of the state was displayed), as well as the signal at the start of the trial associated with each immediate goal location. Five timepoints were modeled in the GLM used in this analysis: the start of the trial, the button press to advance, the appearance of the arrows and the subsequent response, the start of the feedback, and the end of the feedback. The regressors marking the start of the trial and the start of the feedback screen were further individuated by the immediate goal on the trial. A separate error regressor was used when the response was incorrect, that is, when the participant did not properly pursue the immediate goal and received error feedback. All correct trials in which participants moved to, for instance, the cow field used the same trial-start and feedback-start regressors.

The GLM was fit to the normalized functional images. The resulting beta maps were combined at the second level with a voxel-wise threshold of p < 0.001 and cluster corrected (p < 0.05) to control for multiple comparisons. We assessed the univariate response associated with each outcome location by contrasting each particular outcome location with all other outcome locations. The response to the error feedback screen was assessed in a separate contrast against all correct outcomes. To test for any univariate responses related to the immediate goal, we performed an analogous analysis using the trial start regressors, which were individuated based on the immediate goal. For example, the regressor ‘trialStartHouseNext’ was associated with the beginning of every trial where the farmhouse was the immediate goal location. To assess the univariate signal associated with the farmhouse immediate goal, we performed a contrast between this regressor and all other trial start regressors.
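The one-versus-all contrasts described above can be expressed as zero-sum contrast vectors over the individuated regressors. A minimal sketch; regressor names other than ‘trialStartHouseNext’ are invented for illustration:

```python
import numpy as np

def one_vs_rest_contrast(regressors, target):
    """Zero-sum contrast: +1 on the target regressor and -1/(n-1)
    spread over the remaining ones, as in 'farmhouse goal vs. all
    other goals'. Names here are illustrative, not the actual
    design-matrix column names."""
    n = len(regressors)
    c = np.full(n, -1.0 / (n - 1))
    c[regressors.index(target)] = 1.0
    return c

starts = ["trialStartCowNext", "trialStartHouseNext",
          "trialStartFieldCNext", "trialStartFieldDNext"]
c = one_vs_rest_contrast(starts, "trialStartHouseNext")
print(c)  # [-1/3, 1, -1/3, -1/3]; weights sum to ~0
```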

Representational similarity analysis (RSA)

As before, a GLM was fit to the realigned functional images. The following events were modeled with impulse regressors: trial onset (information screen), key press to advance to the decision screen, the prompt and immediately subsequent action (modeled as a single regressor), the onset of the outcome screen, and the termination of the outcome screen. The RSA analysis used beta maps derived from the regressors marking trial onset, prompt/response, and outcome screen onset.

Each of these regressors (except those used in error trials) was further individuated by the (start state, next state, final goal) triple constituting the goal path. There were 8 distinct trial types starting in each state. Each state could serve as the starting point of two single-step sequences (in which the key and treasure chest are in the same location) and four two-step sequences (in which the key and treasure chest are in different locations). Each state could also be the midpoint of a two-step sequence with the treasure chest located in one of two adjacent states. With three regressors used for each trial, there were 4 starting states × 8 trial types × 3 time points = 96 total patterns used to create the Representational Dissimilarity Matrix (RDM) in each searchlight region, where cell x_ij in the RDM is defined as one minus the Pearson correlation between the ith and jth patterns. Values close to 2 therefore represent negative correlation (high representational distance), while values close to 0 indicate positive correlation (low representational distance).
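The RDM construction described above (one minus the Pearson correlation between voxelwise patterns) can be sketched as follows; the 100-voxel region size is an arbitrary stand-in:

```python
import numpy as np

def make_rdm(patterns):
    """RDM with cell (i, j) = 1 - Pearson correlation between the ith
    and jth patterns, so values run from 0 (perfectly correlated)
    to 2 (perfectly anticorrelated)."""
    return 1.0 - np.corrcoef(patterns)

# The paper's dimensions: 4 start states x 8 trial types x 3 time
# points = 96 patterns; the 100-voxel region size is made up here.
patterns = np.random.default_rng(1).normal(size=(96, 100))
rdm = make_rdm(patterns)
print(rdm.shape)  # (96, 96)
```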

To derive the model-based RDMs, the GOLSA model was run on an analogue of the goal pursuit task, using a four-state state space with four actions corresponding to movement in each cardinal direction. The model layer timecourses of activity are shown in Figs.  6 and 7 for one- and two-step trials, respectively. The base GOLSA model is not capable of maintaining a plan across an arbitrary delay, but instead acts immediately to make the necessary state transitions. The competitive queue module (ref. 83) allows state transition sequences to be maintained and executed after a delay, and was therefore necessary to model the task accurately. However, the goal-learning module was not necessary, since goals were externally imposed. Because participants had to demonstrate high performance on the task before entering the scanner, little if any learning took place during the experiment. As a result, the model was trained extensively on the state space before performing any trials used in data collection. To further simulate likely patterns of activity in the absence of significant learning, the input from state to goal-gradient (used in the learning phase of an oscillatory cycle) was removed, and the goal-gradient received steady input from the goal layer, interrupted only by the state-change inhibition signal. In other words, the goal-gradient layer continuously represented the actual goal gradient rather than shifting into learning mode half of the time.

Figure 6

Model activity during a simulated one-step sequence of the Treasure Hunt task. The competitive queueing module first loads a plan and then executes it sequentially. State activity shows that the agent remains in state 1 for the first half of the simulation, while simulated-state (StateSim) shows the state transition the agent simulates as it forms its plan. Adjacent-states (Adjacent) receives input from stateSim, which, along with goal-gradient (Gradient) activity, determines the desired next state and therefore the appropriate transition to make. The plan is kept in queue-store (Store), which receives a burst of input from queue-input (QueueIn) and finally executes the plan by sending output to queue-output (QueueOut), which drives the motor system. The vertical dashed lines indicate the different phases of the simulation used in the creation of the model RDMs. For each layer, activity within each period was averaged across time to form a single vector representing the average pattern for that time period in the trial type being simulated. The bounds of each phase were determined qualitatively. The planning period is longer than the acting and outcome periods because the model takes longer to form a plan than to execute it or observe the outcome.

Figure 7

Model activity during a simulated two-step sequence of the Treasure Hunt task. The competitive queueing module first loads a plan and then executes it sequentially. State activity shows that the agent remains in state 1 for the first half of the simulation, while simulated-state shows the state transitions the agent simulates as it forms its plan. Adjacent-states receives input from simulated-state, which, along with goal-gradient activity, determines the desired next state and therefore the appropriate transitions to make. The plan is kept in queue-store, which receives bursts of input from queue-input and finally executes the plan by sequentially sending output to queue-output, which drives the motor system. To force the agent to go to the appropriate intermediate state, goal activity first reflects the key location and then the chest location. The vertical dashed lines indicate time periods used when creating the RDMs for the two-step sequence simulations. The first three time periods correspond to the first trial in the sequence, while the latter three correspond to the second trial in the sequence. Again, the first planning period is much longer due to the nature of the model dynamics. During the second “planning” period (P2), the plan was already formed, as must have been the case in the actual experiment, since on the second trial of a two-step sequence no information was presented at the start of the trial and had to be remembered from the previous trial.

In the task, participants first saw an information screen from which they could determine the immediate goal state and the appropriate next action. This plan was maintained over a delay before being implemented. At the beginning of each trial simulation, the queuing module was set to “load” while the model interactions determined the best method of getting from the current state to the goal state. This period is analogous to the period in which subjects look at the starting information screen and plan their next move. Then, the queuing module was set to “execute,” modeling the period in which participants are prompted to make their selection. Finally, the chosen action implements a state transition and the environment provides new state information to the state layer, modeling the outcome phase of the experiment.
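The load/execute behavior of the queuing module follows the standard competitive-queuing idea: a stored primacy gradient is read out by repeated winner-take-all selection with self-inhibition. A toy sketch of that readout rule (not the model’s actual continuous dynamics):

```python
import numpy as np

def cq_execute(store):
    """Read out a competitive-queuing plan: repeatedly select the most
    active stored item, emit it, and suppress it so the next-strongest
    item wins the following competition. A minimal abstraction of the
    queue-store / queue-output interaction."""
    store = np.array(store, dtype=float)
    order = []
    while np.any(store > 0):
        i = int(np.argmax(store))   # strongest item wins the competition
        order.append(i)
        store[i] = 0.0              # self-inhibition after execution
    return order

# A two-step plan loaded as a primacy gradient: earlier actions stronger.
plan = [0.9, 0.6, 0.0, 0.0]  # execute action 0, then action 1
print(cq_execute(plan))  # [0, 1]
```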

Some pairs of trials in the task comprised a two-step sequence in which the final goal was initially two states away from the starting state. On the second trial of such sequences, participants were not provided any information on the information screen at the start of the trial, ensuring that they had encoded and maintained all goal-related information from the information screen presented at the start of the first trial in the sequence. These pairs of trials were modeled within a single GOLSA simulation. The model seeks the quickest path to the goal, identifying immediately available subgoals as needed. However, in the task, the location of the key necessitated a specific path to reach the final goal of the treasure chest. To provide these instructions to the model at the start of a two-step simulation, the goal representation from the subgoal (the key) was provided to the model first until the appropriate action was loaded, and then the goal representation shifted to the final goal (the chest). Once the full two-step state transition sequence was loaded in the queue, the actions were read out sequentially, as shown in Fig.  7 .

A separate RDM was generated for each model layer. Patterns were extracted from three time intervals per action (six total for the two-step sequence simulations). Because of the time required to load the queue, the first planning period was longer than all other intervals. For each simulation and time point, the patterns of activity across the units were averaged over time, yielding one vector. Each trial type was repeated 10 times, and the patterns generated in the previous step were averaged across simulation repetitions. The activity of each layer was thus summarized with at most 96 patterns of activity, which were converted into an RDM by taking one minus the Pearson correlation between each pair of patterns. Patterns in which all units were 0 were ignored, since the correlation is undefined for constant vectors.
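The averaging and zero-pattern handling described above can be sketched as follows (the shapes mirror the text; the actual GOLSA analysis code will differ in detail):

```python
import numpy as np

def model_layer_rdm(trial_patterns):
    """Average each trial type's layer-activity patterns across
    simulation repetitions, then take 1 - Pearson correlation between
    the averaged vectors. All-zero patterns are left as NaN because
    correlation is undefined for constant vectors."""
    means = np.array([np.mean(p, axis=0) for p in trial_patterns])
    n = means.shape[0]
    rdm = np.full((n, n), np.nan)
    valid = np.where(~np.all(means == 0, axis=1))[0]
    if valid.size > 1:
        rdm[np.ix_(valid, valid)] = 1.0 - np.corrcoef(means[valid])
    return rdm

# Three trial types, 10 repetitions each, 5 units; the third type
# leaves this layer silent, so its distances are undefined.
rng = np.random.default_rng(2)
conds = [rng.random((10, 5)), rng.random((10, 5)), np.zeros((10, 5))]
rdm = model_layer_rdm(conds)
print(np.isnan(rdm[2, 0]), np.isnan(rdm[0, 1]))  # True False
```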

Because the learning phase of the oscillation was disabled in these simulations, we looked for neural regions corresponding to the layers that play a critical role in the model during the acting phase of the typical learning oscillation. We created RDMs from the following layers: current-state, adjacent-states, goal, goal-gradient, next-desired-state, desired-transition, action-out, simulated-state, and queue-store. As a control, we also added a layer component which generated normally distributed noise (μ = 1, σ = 1).

RSA searchlight

The searchlight analysis was conducted using the Representational Similarity Analysis Toolbox, developed at the University of Cambridge ( http://www.mrc-cbu.cam.ac.uk/methods-and-resources/toolboxes/license/ ). For each layer RDM, a searchlight with a radius of 10 mm was moved through the entire brain. At each voxel, an RDM was created from the patterns in the spherical region centered on that voxel.

An r value was obtained for each voxel by computing the Spearman correlation between the searchlight RDM and the model layer RDM, ignoring trial time periods in which all model units showed no activity. A full pass of the searchlight over the brain produced a whole-brain r map for each subject for each layer. Voxels in regions that perform a function similar to the model component will produce RDMs similar to the model component RDM and thus will be assigned relatively high values. The r maps were then Fisher-transformed into z maps ( \(z=\frac{1}{2}\mathrm{ln}\left(\frac{1+r}{1-r}\right)\) ). The z maps were normalized to the MNI template but were not smoothed, as the searchlight method already introduces substantial smoothing. Second-level effects were assessed with a t test on the normalized z maps, with a cluster-defining threshold of p < 0.001, cluster corrected to p < 0.05 overall. Cluster significance was determined by SPM5 and verified for clusters ≥ 24 voxels in size with a version of 3DClustSim (compile date Jan. 11, 2017) that corrects for the alpha inflation found in previous 3DClustSim versions (ref. 84). The complete results are shown in Table 1 .
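The per-voxel score (Spearman correlation between RDM upper triangles, followed by the Fisher transform z = ½ ln((1 + r)/(1 − r))) can be sketched as follows; the rank computation here ignores ties, which is adequate for continuous dissimilarities but differs from the toolbox’s internals:

```python
import numpy as np

def fisher_z(r):
    # z = 0.5 * ln((1 + r) / (1 - r)), the transform applied to the r maps
    return 0.5 * np.log((1.0 + r) / (1.0 - r))

def searchlight_score(neural_rdm, model_rdm):
    """Spearman correlation between the upper triangles of a searchlight
    RDM and a model RDM, Fisher-transformed. NaN model cells (e.g.,
    from all-zero model patterns) are dropped, mirroring the text.
    Illustrative only."""
    iu = np.triu_indices_from(model_rdm, k=1)
    x, y = neural_rdm[iu], model_rdm[iu]
    keep = ~np.isnan(y)
    x, y = x[keep], y[keep]

    def ranks(v):
        order = np.argsort(v)
        out = np.empty(len(v))
        out[order] = np.arange(len(v))
        return out

    r = np.corrcoef(ranks(x), ranks(y))[0, 1]
    r = np.clip(r, -1 + 1e-12, 1 - 1e-12)  # guard against infinite z at |r| = 1
    return fisher_z(r)

rng = np.random.default_rng(3)
a = rng.random((20, 20)); a = (a + a.T) / 2.0
noisy = a + 0.01 * rng.random((20, 20))
z = searchlight_score(a, (noisy + noisy.T) / 2.0)
print(z > 1.0)  # near-identical RDMs give a large positive z
```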

Data availability

The GOLSA model code for the simulations is available at https://github.com/CogControl/GolsaOrigTreasureHunt . Imaging data are available from the corresponding author on reasonable request.

Mnih, V. et al. Human-level control through deep reinforcement learning. Nature 518 , 529–533 (2015).

Silver, D. et al. Mastering the game of Go without human knowledge. Nature 550 , 354–359 (2017).

Palm, G. & Schwenker, F. Artificial development by reinforcement learning can benefit from multiple motivations. Front. Robot. AI 6 , 6 (2019).

Adams, S. et al. Mapping the landscape of human-level artificial general intelligence. AI Mag. 33 , 25 (2012).

Jonas, E. & Kording, K. Could a neuroscientist understand a microprocessor? bioRxiv https://doi.org/10.1101/055624 (2016).

Brown, J. W. The tale of the neuroscientists and the computer: Why mechanistic theory matters. Front. Neurosci. 8 , (2014).

Wilson, R. C., Takahashi, Y. K., Schoenbaum, G. & Niv, Y. Orbitofrontal cortex as a cognitive map of task space. Neuron 81 , 267–279 (2014).

Schoenbaum, G., Takahashi, Y., Liu, T.-L. & McDannald, M. A. Does the orbitofrontal cortex signal value?. Ann. N. Y. Acad. Sci. 1239 , 87–99 (2011).

Whyte, A. J. et al. Reward-related expectations trigger dendritic spine plasticity in the mouse ventrolateral orbitofrontal cortex. J. Neurosci. https://doi.org/10.1523/JNEUROSCI.2031-18.2019 (2019).

Vikbladh, O. M. et al. Hippocampal contributions to model-based planning and spatial memory. Neuron 102 , 683-693.e4 (2019).

Buckner, R. L. The role of the hippocampus in prediction and imagination. Annu. Rev. Psychol. 61 , 27–48 (2010).

Cools, A. R. Role of the neostriatal dopaminergic activity in sequencing and selecting behavioural strategies: Facilitation of processes involved in selecting the best strategy in a stressful situation. Behav. Brain Res. 1 , 361–378 (1980).

Nee, D. E. & Brown, J. W. Rostral-caudal gradients of abstraction revealed by multi-variate pattern analysis of working memory. Neuroimage 63 , 1285–1294 (2012).

Riggall, A. C. & Postle, B. R. The relationship between working memory storage and elevated activity as measured with functional magnetic resonance imaging. J. Neurosci. 32 , 12990–12998 (2012).

Miller, E. K. & Cohen, J. D. An integrative theory of prefrontal cortex function. Annu. Rev. Neurosci. 24 , 167–202 (2001).

Donoso, M., Collins, A. G. E. & Koechlin, E. Human cognition. Foundations of human reasoning in the prefrontal cortex. Science 344 , 1481–1486 (2014).

Aron, A. R. The neural basis of inhibition in cognitive control. Neuroscientist 13 , 214–228 (2007).

Alexander, W. H. & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nat. Neurosci. 14 , 1338–1344 (2011).

Brown, J. W. & Alexander, W. H. Foraging value, risk avoidance, and multiple control signals: How the anterior cingulate cortex controls value-based decision-making. J. Cogn. Neurosci. 29 , 1656–1673 (2017).

Cooper, R. & Shallice, T. Contention scheduling and the control of routine activities. Cogn. Neuropsychol. 17 , 297–338 (2000).

Leung, J., Shen, Z., Zeng, Z. & Miao, C. Goal modelling for deep reinforcement learning agents. in 271–286 (2021). https://doi.org/10.1007/978-3-030-86486-6_17 .

Todorov, E. & Jordan, M. I. Optimal feedback control as a theory of motor coordination. Nat. Neurosci. 5 , 1226–1235 (2002).

Yan, G. et al. Network control principles predict neuron function in the Caenorhabditis elegans connectome. Nature 550 , 519–523 (2017).

Gu, S. et al. Optimal trajectories of brain state transitions. Neuroimage 148 , 305–317 (2017).

Stiso, J. et al. White matter network architecture guides direct electrical stimulation through optimal state transitions. Cell Rep. 28 , 2554-2566.e7 (2019).

Golub, M. D. et al. Learning by neural reassociation. Nat. Neurosci. 21 , 607–616 (2018).

Powers, W. T. Quantitative analysis of purposive systems: Some spadework at the foundations of scientific psychology. Psychol. Rev. 85 , 417–435 (1978).

Marken, R. S. & Mansell, W. Perceptual control as a unifying concept in psychology. Rev. Gen. Psychol. 17 , 190–195 (2013).

Juechems, K. & Summerfield, C. Where does value come from?. PsyArxiv https://doi.org/10.31234/osf.io/rxf7e (2019).

Carroll, T. J., McNamee, D., Ingram, J. N. & Wolpert, D. M. Rapid visuomotor responses reflect value-based decisions. J. Neurosci. https://doi.org/10.1523/JNEUROSCI.1934-18.2019 (2019).

Hart, P., Nilsson, N. & Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 4 , 100–107 (1968).

Dijkstra, E. W. A note on two problems in connexion with graphs. Numer. Math. 1 , 269–271 (1959).

Silver, D. et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362 , 1140–1144 (2018).

Fine, J. M., Zarr, N. & Brown, J. W. Computational neural mechanisms of goal-directed planning and problem solving. Comput. Brain Behav. 3 , 472–493 (2020).

Martinet, L.-E., Sheynikhovich, D., Benchenane, K. & Arleo, A. Spatial learning and action planning in a prefrontal cortical network model. PLoS Comput. Biol. 7 , e1002045 (2011).

Ivey, R., Bullock, D. & Grossberg, S. A neuromorphic model of spatial lookahead planning. Neural Netw. 24 , 257–266 (2011).

Knoblock, C. A. Abstracting the tower of Hanoi. Work. Notes AAAI-90 Work. Autom. Gener. Approx. Abstr. 1–11 (1990).

Kriegeskorte, N. Representational similarity analysis–connecting the branches of systems neuroscience. Front. Syst. Neurosci. https://doi.org/10.3389/neuro.06.004.2008 (2008).

Averbeck, B. B., Chafee, M. V., Crowe, D. A. & Georgopoulos, A. P. Parallel processing of serial movements in prefrontal cortex. Proc. Natl. Acad. Sci. USA 99 , 13172–13177 (2002).

Rhodes, B. J., Bullock, D., Verwey, W. B., Averbeck, B. B. & Page, M. P. Learning and production of movement sequences: Behavioral, neurophysiological, and modeling perspectives. Hum. Mov. Sci. 23 , 699–746 (2004).

Gilbert, D. T. & Wilson, T. D. Prospection: experiencing the future. Science 317 , 1351–1354 (2007).

Buckner, R. L. & Carroll, D. C. Self-projection and the brain. Trends Cogn. Sci. 11 , 49–57 (2007).

Rao, R. P. N. & Ballard, D. H. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2 , 79–87 (1999).

Kriegeskorte, N., Formisano, E., Sorger, B. & Goebel, R. Individual faces elicit distinct response patterns in human anterior temporal cortex. Proc. Natl. Acad Sci. USA 104 , 20600–20605 (2007).

Guest, O. & Love, B. C. What the success of brain imaging implies about the neural code. Elife 6 , (2017).

Tzourio-Mazoyer, N. et al. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15 , 273–289 (2002).

Baldassarre, G. et al. Intrinsically motivated action-outcome learning and goal-based action recall: A system-level bio-constrained computational model. Neural Netw. 41 , 168–187 (2013).

Barto, A., Sutton, R. & Anderson, C. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Trans. Syst. Man. Cybern. 5 , 834–846 (1983).

Reynolds, J. H. & Heeger, D. J. The normalization model of attention. Neuron 61 , 168–185 (2009).

Banino, A. et al. Vector-based navigation using grid-like representations in artificial agents. Nature 557 , 429–433 (2018).

Brown, J. W., Bullock, D. & Grossberg, S. How laminar frontal cortex and basal ganglia circuits interact to control planned and reactive saccades. Neural Netw. 17 , 471–510 (2004).

Niki, H. & Watanabe, M. Prefrontal and cingulate unit activity during timing behavior in the monkey. Brain Res. 171 , 213–224 (1979).

Friston, K. The free-energy principle: A unified brain theory?. Nat. Rev. Neurosci. 11 , 127–138 (2010).

Friston, K., Mattout, J. & Kilner, J. Action understanding and active inference. Biol. Cybern. 104 , 137–160 (2011).

Alexander, W. H. & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nat. Neurosci. 14 , 1338–1344 (2011).

Alexander, W. H. & Brown, J. W. A general role for medial prefrontal cortex in event prediction. Front. Comput. Neurosci. 8 , (2014).

Mante, V., Sussillo, D., Shenoy, K. V. & Newsome, W. T. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503 , 78–84 (2013).

Khaligh-Razavi, S.-M. & Kriegeskorte, N. Deep supervised, but not unsupervised, models may explain it cortical representation. PLoS Comput. Biol. 10 , e1003915 (2014).

Wen, H. et al. Neural encoding and decoding with deep learning for dynamic natural vision. Cereb. Cortex 28 , 4136–4160 (2018).

Williamson, R. C., Doiron, B., Smith, M. A. & Yu, B. M. Bridging large-scale neuronal recordings and large-scale network models using dimensionality reduction. Curr. Opin. Neurobiol. 55 , 40–47 (2019).

Yang, G. R., Joglekar, M. R., Song, H. F., Newsome, W. T. & Wang, X.-J. Task representations in neural networks trained to perform many cognitive tasks. Nat. Neurosci. 22 , 297–306 (2019).

Watkins, C. J. C. H. & Dayan, P. Q-learning. Mach. Learn. 8 , 279–292 (1992).

Pfeiffer, B. E. & Foster, D. J. Hippocampal place-cell sequences depict future paths to remembered goals. Nature 497 , 74–79 (2013).

Van der Meer, M. A. & Redish, A. D. Expectancies in decision making, reinforcement learning, and ventral striatum. Front. Neurosci. 4 , 29–37 (2010).

Platt, J. R. Strong inference: Certain systematic methods of scientific thinking may produce much more rapid progress than others. Science 146 , 347–353 (1964).

Glasius, R., Komoda, A. & Gielen, S. C. A. M. Neural network dynamics for path planning and obstacle avoidance. Neural Netw. 8 , 125–133 (1995).

Alexander, W. H. & Brown, J. W. Frontal cortex function as derived from hierarchical predictive coding. Sci. Rep. 8 , 3843 (2018).

Badre, D. & D’Esposito, M. Is the rostro-caudal axis of the frontal lobe hierarchical?. Nat. Rev. Neurosci. 10 , 659–669 (2009).

Cooper, R. P. & Shallice, T. Hierarchical schemas and goals in the control of sequential behavior. Psychol. Rev. 113 , 887–916 (2006).

Dolan, R. J. & Dayan, P. Goals and Habits in the Brain. Neuron 80 , 312–325 (2013).

Moors, A. & De Houwer, J. Automaticity: A theoretical and conceptual analysis. Psychol. Bull. 132 , 297–326 (2006).

Grossberg, S. Contour enhancement, short term memory, and constancies in reverberating neural networks. Stud. Appl. Math. 52 , 213–257 (1973).

Busemeyer, J. R. & Townsend, J. T. Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychol. Rev. https://doi.org/10.1037/0033-295X.100.3.432 (1993).

Usher, M. & McClelland, J. L. The time course of perceptual choice: The leaky, competing accumulator model. Psychol. Rev. https://doi.org/10.1037/0033-295X.108.3.550 (2001).

Barto, A. G., Sutton, R. S. & Brouwer, P. S. Associative search network: A reinforcement learning associative memory. Biol. Cybern. 40 , 201–211 (1979).

Mathôt, S., Schreij, D. & Theeuwes, J. OpenSesame: An open-source, graphical experiment builder for the social sciences. Behav. Res. Methods 44 , 314–324 (2012).

Kanwisher, N., McDermott, J. & Chun, M. M. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J. Neurosci. 17 , 4302–4311 (1997).

O’Craven, K. M., Downing, P. E. & Kanwisher, N. fMRI evidence for objects as the units of attentional selection. Nature 401 , 584–587 (1999).

Anzellotti, S., Mahon, B. Z., Schwarzbach, J. & Caramazza, A. Differential activity for animals and manipulable objects in the anterior temporal lobes. J. Cogn. Neurosci. 23 , 2059–2067 (2011).

Moeller, S. et al. Multiband multislice GE-EPI at 7 tesla, with 16-fold acceleration using partial parallel imaging with application to high spatial and temporal whole-brain FMRI. Magn. Reson. Med. 63 , 1144–1153 (2010).

Nee, D. E. & D’Esposito, M. The hierarchical organization of the lateral prefrontal cortex. Elife 5 , (2016).

Ashburner, J. & Friston, K. Multimodal image coregistration and partitioning-a unified framework. Neuroimage 6 , 209–217 (1997).

Bullock, D. Adaptive neural models of queuing and timing in fluent action. Trends Cogn. Sci. 8 , 426–433 (2004).

Eklund, A., Nichols, T. E. & Knutsson, H. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proc. Natl. Acad. Sci. https://doi.org/10.1073/pnas.1602413113 (2016).

Acknowledgements

We thank A. Ramamoorthy for helpful discussions and J. Fine and W. Alexander for helpful comments on the manuscript. Supported by the Indiana University Imaging Research Facility. JWB was supported by NIH R21 DA040773.

Author information

Authors and Affiliations

Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA

Noah Zarr & Joshua W. Brown

Contributions

J.W.B. and N.Z. designed the model and experiment. N.Z. implemented and simulated the model, implemented and ran the fMRI experiment, and analyzed the data. J.W.B. and N.Z. wrote the paper.

Corresponding author

Correspondence to Joshua W. Brown .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Zarr, N. & Brown, J. W. Foundations of human spatial problem solving. Sci. Rep. 13 , 1485 (2023). https://doi.org/10.1038/s41598-023-28834-3

Received : 09 September 2022

Accepted : 25 January 2023

Published : 27 January 2023

DOI : https://doi.org/10.1038/s41598-023-28834-3


Open access | Published: 23 March 2021

Why spatial is special in education, learning, and everyday activities

Toru Ishikawa & Nora S. Newcombe

Cognitive Research: Principles and Implications volume 6, Article number: 20 (2021)

The structure of human intellect can be conceptualized as consisting of three broad but correlated domains: verbal ability, numerical ability, and spatial ability (Wai et al. 2009). Verbal and numerical abilities are traditionally emphasized in the classroom context, as the phrase "the three Rs" (reading, writing, and arithmetic) suggests. However, research has increasingly demonstrated that spatial ability also plays an important role in academic achievement, especially in learning STEM (science, technology, engineering, and mathematics) (National Research Council 2006; Newcombe 2010). For example, envisioning the shape or movement of an imagined object contributes to the understanding of intersections of solids in calculus, structures of molecules in chemistry, and the formation of landscapes in geology.

Spatial thinking is a broader topic than spatial ability, however (Hegarty 2010). We use symbolic spatial tools, such as graphs, maps, and diagrams, in both educational and everyday contexts. These tools significantly enhance human reasoning; for example, graphs are a powerful way to show the relationship among a set of variables in two (or more) dimensions. STEM disciplines use these tools frequently and, in addition, often have specific representations that students need to master, such as block diagrams in geology. Although teachers may assume that these representations are easy to read, maps, diagrams, and graphs often pose difficulty for students, especially those with low spatial ability (e.g., a graph that shows changes in an object's velocity over time) (Kozhevnikov et al. 2007).

As well as understanding spatial representations that are provided by teachers or in textbooks, good spatial thinkers can choose or even create representations that are suitable for the task at hand. Novices tend to prefer representations that are realistic and detailed, often more realistic and detailed than necessary, because they include irrelevant information (Hegarty 2010; Tversky and Morrison 2002). Being good at spatial thinking entails the ability to select and create appropriate spatial representations, based on sound knowledge of content in a specific domain.

Navigation is a special kind of spatial thinking, which requires us to understand our location (where we are) and orientation (which direction we are facing) in relation to the surroundings. Sometimes we may construct reasonably accurate mental representations of the environment ("maps in the head" or "cognitive maps"). However, people often have difficulty with cognitive mapping (Ishikawa and Montello 2006; Weisberg and Newcombe 2016), especially in environmental space (beyond figural or vista space), when we cannot view a layout in its entirety from a single viewpoint (Ittelson 1973; Jacobs and Menzel 2014; Montello 1993). People thus need to move around and integrate separate pieces of information available at each viewpoint into a common frame of reference, which poses extra cognitive processing demands (Han and Becker 2014; Holmes et al. 2018; Meilinger et al. 2014). Spatial orientation and navigation may be problematic for some people even with maps or satellite navigation (Ishikawa 2019; Liben et al. 2002).

Characteristics of spatial thinking

Spatial thinking has unique characteristics that offer interesting research challenges. First, spatial thinking concerns space at different scales. Thinking about the structures of molecules, envisioning the folding and unfolding of a piece of paper, making a mechanical drawing, packing a suitcase, finding your way to a destination in a new environment, and reasoning about the formative process of a geologic structure all concern thinking and reasoning about space, but they span a wide range of spatial and temporal scales. Expertise in spatial thinking in STEM domains typically focuses on a specific scale, with organic chemistry, surgery, mechanical engineering, architecture, structural geology, and planetary science spanning but not exhausting the range. Spatial skills may vary across scale. For example, Hegarty et al. (2006) showed that learning from direct navigation in the environment differed from learning from a video or a desktop virtual environment, yielding two separate factors in factor analysis, and that the former was correlated with self-reported sense of direction, whereas the latter was correlated with psychometrically assessed spatial ability. Learmonth et al. (2001) showed that young children's use of landmark information to reorient depends on the size of the space.

Second, spatial thinking occurs in various media, including 2D static images, 3D animations, schematic diagrams, indoor and outdoor environments, immersive virtual environments, and spatial language. Each medium has its own way of representing spatial information (Liben 1999 ; Tversky 2001 ) and knowledge acquired from different media differs in structure and flexibility in important ways (Rieser 1989 ; Taylor and Tversky 1992 ; Thorndyke and Hayes-Roth 1982 ). In discussing spatial thinking and learning media, one should distinguish between internal representations (knowledge in the mind) and external representations (spatial products or expressions presented to a person). External spatial representations are shown visually in a certain level of detail or resolution (Goodchild and Proctor 1997 ), and verbally in a specific frame of reference (Levinson 1996 ).

Third, spatial thinking skills vary at both the group level and the individual level. Group differences are sometimes of concern to the instructor, for example, in consideration of male–female differences in entry and retention rates in STEM disciplines (Belser et al. 2018; Chen 2013; Sithole et al. 2017). Instructors are also concerned with individual differences in aptitudes: students vary in their spatial and verbal abilities, and some students are good at spatial tasks while others are good at verbal tasks. Is there a good way to adjust instructional methods to students' aptitudes? Furthermore, given the existence of group and individual differences in spatial thinking, another question of concern is how instruction can have an impact, for example, whether male–female differences in spatial thinking, when they occur, can be eliminated by training, or how people with difficulty in spatial thinking can best improve.

Papers in this special issue

The papers in this special issue center on three major topics: (a) spatial thinking and the skill of mental rotation; (b) spatial thinking in the classroom context or in STEM curricula; and (c) spatial thinking in wayfinding or large-scale spatial cognition. Here is a link to the papers (https://cognitiveresearchjournal.springeropen.com/spatial-collection) (Table 1).

Mental rotation

Mental rotation is one of the major spatial abilities assessed by psychometric spatial tests, and has been much studied. Importantly, it has been shown to correlate with success in a variety of other spatial thinking tasks. Intriguingly, it also shows large male–female differences in adults, although sex differences in other spatial skills tend to be smaller or even non-existent. Whether there are sex differences in mental rotation in children is a more controversial topic; sex differences may emerge over the course of development (Lauer et al. 2019 ; Newcombe 2020 ), but for an alternative, see Johnson & Moore’s paper in this special issue. There are also papers in the special issue investigating the malleability of mental rotation with practice (Moen et al.), and its relations with spatial anxiety (Alvarez-Vargas, Abad, & Pruden) and everyday experience (Cheng, Hegarty, & Chrastil). In an unexpected twist, it turns out that mental rotation may even be involved with tracking tasks and executing intended actions at specified times (Kubik, Del Messier, & Mantyla).
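As a toy illustration of what a mental-rotation item asks of a participant (a sketch with invented stimuli and an invented helper name, not material from the special issue), one can test whether a candidate grid figure is a planar rotation of a target or merely its mirror image, the classic distractor:

```python
import numpy as np

# Toy mental-rotation judgment: is the candidate a rotation of the target,
# or a mirror image (the classic distractor)? The L-shaped stimulus and the
# helper name are invented for illustration.
def is_rotation(target: np.ndarray, candidate: np.ndarray) -> bool:
    """True if candidate equals target rotated by 0, 90, 180, or 270 degrees."""
    return any(np.array_equal(candidate, np.rot90(target, k)) for k in range(4))

target = np.array([[1, 1, 1],
                   [1, 0, 0]])                   # an asymmetric L-shape
print(is_rotation(target, np.rot90(target, 2)))  # True: 180-degree rotation
print(is_rotation(target, np.fliplr(target)))    # False: mirror, not rotation
```

Psychometric versions of the task additionally vary the rotation angle and time the response; since Shepard and Metzler's classic studies, response time has been shown to grow with the angular disparity between target and candidate.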

Spatial thinking in STEM

Spatial thinking, as discussed above, includes advanced disciplinary thinking of a spatial nature, based on expert knowledge and reasoning in each domain. Examples of such academic disciplines include structural geology, surgery, chemistry (Atit, Uttal, & Stieff), and mathematics (Aldugom, Fenn, & Cook). Despite the contribution of spatial thinking to a physical prediction task, however, spatial skills did not account for all of the individual differences observed in intuitive physics (Mitko & Fischer). Variation in spatial learning is already evident in early adolescence, as shown in a study of learning about plate tectonics using a computer visualization (Epler-Ruths, McDonald, Pallant, & Lee). The development of effective spatial instruction should consider how to bring scientific research into the educational practice of spatial thinking (Gagnier & Fisher) and how to support elementary school teachers who are liable to spatial anxiety (Burte, Gardony, Hutton, & Taylor).

Spatial thinking and navigation

Space at environmental scale, or navigational spatial thinking, is vital in everyday life for wayfinding in the environment. Issues of concern to researchers include spatial reasoning in different spatial frames of reference (Weisberg & Chatterjee), learning performance at different spatial scales (Zhao et al.), relationship with sense of direction (Zhao et al.; Stites, Matzen, & Gastelum), the possibility of improving cognitive mapping skills (Ishikawa & Zhou), and navigation in complex environments or emergent situations (Stites, Matzen, & Gastelum). Uncertainty in a novel environment prompts people to seek information, and a review of the literature suggests the importance of examining task behavior, not just the state of knowledge at the end of a navigation experience (Keller, Taylor, & Brunye). In the context of a discussion of the possibility of instructing spatial thinking, participation in spatial activities during childhood or adolescence and its relationship with spatial thinking has attracted the attention of researchers and practitioners (Peterson et al.). Sex differences in navigation may arise from girls and boys having different childhood wayfinding experiences (Vieites, Pruden, & Reeb-Sutherland).

Questions for further thinking about spatial thinking

Looking over the articles in the special issue as well as other recent studies suggests questions for further research into spatial thinking.

Spatial ability and spatial thinking

How does mental rotation relate to spatial thinking in various academic disciplines? The existing literature points to the malleability of the skill of mental rotation: given that mental rotation is an important component of spatial thinking, how can training in mental rotation improve (or transfer to) spatial thinking? Does the effect differ in different disciplines or for different types of spatial thinking in a specific discipline? What about examining other spatial abilities, such as perspective taking, spatial orientation, or flexibility of closure, in regard to their relations with spatial thinking of various kinds? Arguably, we have focused too much on mental rotation, and ignored other kinds of crucial mental operations.

Spatial thinking as a domain-specific learning skill

Researchers have studied spatial thinking in various STEM disciplines, including geoscience, surgery, chemistry, and mathematics, and also in the K-12 setting and at the college level. Continued research into the types of spatial thinking that are required in disciplinary learning and that characterize expert thinking in each domain would contribute to better theoretical understanding and educational practice. Specific questions include: How is STEM learning related to (explained or predicted by) facility with spatial thinking? Is spatial thinking different from spatial ability as assessed by spatial tests? In a specific STEM discipline, what is the relationship among spatial thinking, spatial ability, and domain-specific knowledge? What do spatial thinking, spatial ability, and domain-specific knowledge each contribute to mastery of the discipline? And, importantly, how can one develop curricula that effectively take scientific knowledge of spatial thinking into account to encourage students to pursue STEM careers?

Spatial thinking as it relates to our everyday activities

Space is a fundamental component of our cognition and behavior: it surrounds us and affords us opportunities to function adaptively. Thinking in, about, and with space characterizes (or conditions) our everyday activities. Finding one's way in the environment (cognitive mapping), communicating information in graphs and diagrams (visualization), and using space to think about nonspatial phenomena (spatial metaphors or spatialization) are major examples of everyday spatial thinking. How are these everyday spatial thinking skills acquired and, where possible, instructed? Can navigation and wayfinding skills be trained, or can people's "sense of direction" be improved by training? Does participation in spatial activities affect spatial thinking? Does self-assessment of one's spatial thinking skills affect (promote or hinder) participation in spatial activities?

Investigation of these questions, in collaboration between researchers and practitioners, will deepen our understanding of what spatial thinking is and how it relates to our cognition and behavior. We hope that the special issue fosters more research along these lines and enhances scientific and pedagogical interest in this vital domain of human cognition.

Belser, C., Shillingford, A., & Daire, A. P. (2018). Factors influencing undergraduate student retention in STEM majors: Career development, math ability, and demographics. The Professional Counselor, 8, 262–276.

Chen, X. (2013). STEM attrition: College students’ paths into and out of STEM fields (NCES 2014–001) . Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.

Goodchild, M. F., & Proctor, J. (1997). Scale in a digital geographic world. Geographical and Environmental Modelling, 1, 5–23.

Han, X., & Becker, S. (2014). One spatial map or many? Spatial coding of connected environments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40, 511–531.

Hegarty, M. (2010). Components of spatial intelligence. Psychology of Learning and Motivation, 52, 265–297.

Hegarty, M., Montello, D. R., Richardson, A. E., Ishikawa, T., & Lovelace, K. (2006). Spatial abilities at different scales: Individual differences in aptitude-test performance and spatial-layout learning. Intelligence , 34 , 151–176.

Holmes, C. A., Newcombe, N. S., & Shipley, T. F. (2018). Move to learn: Integrating spatial information from multiple viewpoints. Cognition, 178, 7–25.

Ishikawa, T. (2019). Satellite navigation and geospatial awareness: Long-term effects of using navigation tools on wayfinding and spatial orientation. The Professional Geographer, 71, 197–209.

Ishikawa, T., & Montello, D. R. (2006). Spatial knowledge acquisition from direct experience in the environment: Individual differences in the development of metric knowledge and the integration of separately learned places. Cognitive Psychology, 52, 93–129.

Ittelson, W. H. (1973). Environment perception and contemporary perceptual theory. In W. H. Ittelson (Ed.), Environment and cognition (pp. 1–19). New York, NY: Seminar Press.

Jacobs, L. F., & Menzel, R. (2014). Navigation outside of the box: What the lab can learn from the field and what the field can learn from the lab. Movement Ecology, 2, 3.

Kozhevnikov, M., Motes, M. A., & Hegarty, M. (2007). Spatial visualization in physics problem solving. Cognitive Science, 31, 549–579.

Lauer, J. E., Yhang, E., & Lourenco, S. F. (2019). The development of gender differences in spatial reasoning: A meta-analytic review. Psychological Bulletin, 145 (6), 537–565.

Learmonth, A. E., Newcombe, N. S., & Huttenlocher, J. (2001). Toddlers’ use of metric information and landmarks to reorient. Journal of Experimental Child Psychology, 80, 225–244.

Levinson, S. C. (1996). Frames of reference and Molyneux’s question: Cross-linguistic evidence. In P. Bloom, M. A. Peterson, L. Nadel, & M. F. Garrett (Eds.), Language and space (pp. 109–169). Cambridge, MA: MIT Press.

Liben, L. S. (1999). Developing an understanding of external spatial representations. In I. E. Sigel (Ed.), Development of mental representation: Theories and applications (pp. 297–321). Mahwah, NJ: Erlbaum.

Liben, L. S., Kastens, K. A., & Stevenson, L. M. (2002). Real-world knowledge through real-world maps: A developmental guide for navigating the educational terrain. Developmental Review, 22, 267–322.

Meilinger, T., Riecke, B. E., & Bülthoff, H. H. (2014). Local and global reference frames for environmental spaces. Quarterly Journal of Experimental Psychology, 67, 542–569.

Montello, D. R. (1993). Scale and multiple psychologies of space. In A. U. Frank & I. Campari (Eds.), Spatial information theory (pp. 312–321). Berlin: Springer.

National Research Council. (2006). Learning to think spatially . Washington, DC: National Academies Press.

Newcombe, N. S. (2010). Picture this: Increasing math and science learning by improving spatial thinking. American Educator, 34 (2), 29–43.

Newcombe, N. S. (2020). The puzzle of spatial sex differences: Current status and prerequisites to solutions. Child Development Perspectives, 14 (4), 251–257.

Rieser, J. J. (1989). Access to knowledge of spatial structure at novel points of observation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 1157–1165.

Sithole, A., Chiyaka, E. T., McCarthy, P., Mupinga, D. M., Bucklein, B. K., & Kibirige, J. (2017). Student attraction, persistence and retention in STEM programs: Successes and continuing challenges. Higher Education Studies, 7 (1), 46–59.

Taylor, H. A., & Tversky, B. (1992). Spatial mental models derived from survey and route descriptions. Journal of Memory and Language, 31, 261–292.

Thorndyke, P. W., & Hayes-Roth, B. (1982). Differences in spatial knowledge acquired from maps and navigation. Cognitive Psychology, 14, 560–589.

Tversky, B. (2001). Spatial schemas in depictions. In M. Gattis (Ed.), Spatial schemas and abstract thought (pp. 79–112). Cambridge, MA: MIT Press.

Tversky, B., & Morrison, J. B. (2002). Animation: Can it facilitate? International Journal of Human-Computer Studies, 57, 247–262.

Wai, J., Lubinski, D., & Benbow, C. P. (2009). Spatial ability for STEM domains: Aligning over 50 years of cumulative psychological knowledge solidifies its importance. Journal of Educational Psychology, 101, 817–835.

Weisberg, S. M., & Newcombe, N. S. (2016). Why do (some) people make a cognitive map? Routes, places, and working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 768–785.

Author information

Authors and Affiliations

INIAD Toyo University, Tokyo, Japan

Toru Ishikawa

Temple University, Philadelphia, USA

Nora S. Newcombe

Corresponding authors

Correspondence to Toru Ishikawa or Nora S. Newcombe .

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Ishikawa, T., & Newcombe, N. S. Why spatial is special in education, learning, and everyday activities. Cogn. Research 6, 20 (2021). https://doi.org/10.1186/s41235-021-00274-5

Published: 23 March 2021

AI-driven geospatial workflows

Discover how organizations are building a more resilient future with accelerated spatial problem-solving

Satellite image of land with extracted buildings colored red and green land and trees

What is GeoAI?

Geospatial artificial intelligence (GeoAI) is the application of artificial intelligence (AI) fused with geospatial data, science, and technology to accelerate real-world understanding of business opportunities, environmental impacts, and operational risks. Organizations are modernizing operations to run at scale through automated data generation and approachable spatial tools and algorithms. 

Extract rich geospatial data with deep learning

Save time by automating the extraction, classification, and detection of information from data such as imagery, video, point clouds, and text.

Perform predictive analysis using machine learning

Build more accurate models. Detect clusters, calculate change, find patterns, and forecast outcomes with spatial algorithms backed by experts.

Aerial image of buildings, homes, and green trees along a coastline and blue ocean

Model the real world for prediction

Aerial imagery is used to extract imagery of buildings and roads in Grenada to identify the population and infrastructure at risk for landslides.

Why is GeoAI important?

GeoAI is transforming the speed at which we extract meaning from complex datasets, thereby aiding us in addressing the earth’s most pressing challenges. It reveals and helps us perceive intricate patterns and relationships in a variety of data that continues to grow exponentially. Organizations leveraging GeoAI are revolutionizing how they turn data into information, with models that adapt even as data evolves. 

Improve data quality, consistency, and accuracy

Streamline manual data generation workflows by using the power of automation to increase efficiency and reduce costs.

Accelerate the time to situational awareness

Monitor and analyze events, assets, and entities from sensors and sources such as video to enable quicker response times and proactive decisions.

Bring location intelligence to decision-making

Make data-driven decisions with real-world awareness. Improve business outcomes with insight from spatial patterns and accurate predictions.

Aerial image of a landscape that includes a field and hills with green trees, ponds, and roads

Create a sustainable future

Optimize resource management and understand the impact of business decisions on the community to reduce waste and better plan and manage sites.

How is GeoAI used?

GeoAI is used in various industries and applications to tackle challenges and proactively seize opportunities. Explore how GeoAI is used to optimize crop yields, heighten community safety, streamline asset inspection, shorten emergency response times, and more.

State and local government

GeoAI is accelerating the speed at which government officials better serve communities using data. By leveraging GeoAI, governments can model the impacts of urban development, understand the availability of resources to the population, forecast road and infrastructure deterioration, and identify land-use change (such as new buildings) to proactively take action.

Natural resources

GeoAI is revolutionizing the precision agriculture market by aiding the automated detection of invasive species. It helps the oil and gas industry monitor assets through automated extraction of flares, new well pads, or field access roads. Foresters and landowners use GeoAI to give them knowledge about the volumes and species of trees without a time-consuming on-site inspection. 

National mapping and statistics

GeoAI is enhancing the responsiveness, productivity, and speed of product delivery for national mapping agencies. Through automation, these organizations are scaling their internal capacities and production workflows. A national mapping department can quickly update a nation's geographic information system (GIS) in hours rather than days or months.

Defense and intelligence

GeoAI is speeding up how organizations extract information, identify patterns, and determine changes in big data. An intelligence organization can support its activity-based intelligence efforts by automating how they analyze information related to events, entities, surveillance video, and remotely sensed data.

Public safety

GeoAI is improving public safety as it relates to traffic accidents, emergency response, and disaster management. Organizations are making communities safer by predicting where accidents are likely to occur and optimizing emergency response times. Damaged infrastructure and navigable roads can be quickly identified to help allocate first responders.

Insurance

GeoAI is helping insurance organizations understand the impact of an event in hours instead of days to improve claim processing and efficiently help members. Insurance companies can use imagery and GeoAI to detect and classify damage that affects their members. With this understanding, they can get members back on their feet more quickly.

Architecture, engineering, and construction

GeoAI is transforming the architecture, engineering, and construction (AEC) industry with its ability to extract information from imagery, which feeds a digital twin. This data allows decision-makers to improve project management, identify potential risks, and optimize building performance. As a result, architecture firms can design energy-efficient buildings.

Business

GeoAI is accelerating smart business decisions, delivering insight and predictions that drive better market planning, site selection, supply chain efficiency, and customer intelligence. With these insights, a business can respond to customer behavior and determine whether a new market area is viable based on pattern and predictive analysis of market characteristics.

A large green field with rows of silver square solar panels

GeoAI for good

By providing decision-makers with accurate and timely information, GeoAI has the potential to positively impact various areas of society and contribute to the greater good. Explore how GeoAI is unlocking benefits in areas such as public health and conservation.

Getting started with Esri

Shorten the time to insights

Combine the world’s most powerful GIS and location intelligence software with the scalability and power of AI. Esri’s long-standing expertise gives you a trusted solution for extracting meaning from big data. Eliminate the need for large amounts of training data, massive compute resources, and extensive AI knowledge. Modernize how you approach spatial problems at scale with Esri.

You don’t have to start from scratch

Getting started with GeoAI can sometimes feel like a daunting task. Use pretrained deep learning models and spatial machine learning tools backed by spatial experts. Our trained deep learning models provide the means for anyone to start extracting, classifying, detecting, and problem-solving with the data you have—no training data required. And our machine learning tools allow you to get started with UI-based tools with data-driven defaults that help guide you.

Satellite image showing a cluster of buildings with some outlined in red

FINE-TUNE TO YOUR NEEDS

Tweak our models to get them just right

With a starting point, you now have the means to focus on fine-tuning. Tweak our deep learning models and machine learning algorithms to fit your parameters and desired accuracy. We provide you the flexibility to tap into advanced settings and customize.

Vehicles on a 4-lane highway with a large red semi-trailer truck outlined with a green rectangle

BUILD CUSTOM MODELS

Integrate with open-source packages

If you have established methods, pair them with ours and models from the open-source ecosystem. Easily use popular models from libraries like Timm, MMDetection, and MMSegmentation. Leverage built-in connections to R and Python to bridge the gaps in your custom models.

A multi-colored aerial image in blue, green, red, and pink that identify parcels of land

Dive deeper

Discover more about spatial analysis, spatial data science, and working with remotely sensed data in ArcGIS.

Spatial analysis and data science

Imagery and remote sensing

Learn how you can apply GeoAI

Schedule a conversation with one of our experienced sales consultants. Tell us about the workflows you’re trying to improve, and we’ll show you how GeoAI can support your organization.


Spatial Reasoning Test

What is spatial reasoning?

Gain insights into your spatial intellect. Boost your test scores and build your confidence by leveraging our comprehensive spatial reasoning questionnaire. Take your spatial reasoning skills to the next level and develop your cognitive ability through practice using this dynamic, fun test.

Why is spatial reasoning important?

How you can use this test

How it works

What's inside

Get immediate feedback by measuring these traits in you

Assessment insights

Scientific and empirical foundations

Linn, M. C., & Petersen, A. C. (1985). Emergence and characterization of sex differences in spatial ability: A meta-analysis. Child Development, 56(6), 1479-1498.

Spatial ability in STEM fields: Wai, J., Lubinski, D., & Benbow, C. P. (2009). Spatial ability for STEM domains: Aligning over 50 years of cumulative psychological knowledge solidifies its importance. Journal of Educational Psychology, 101(4), 817-835.

Importance of spatial reasoning in design and architecture: Ganis, G., & Kievit, R. A. (2015). A new set of three-dimensional shapes for investigating mental rotation processes: Validation data and stimulus set. Journal of Open Psychology Data, 3(1), e3.

Spatial reasoning training: Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., & Newcombe, N. S. (2013). The malleability of spatial skills: A meta-analysis of training studies. Psychological Bulletin, 139(2), 352-402.

Spatial reasoning and team performance: Stieff, M., & Uttal, D. (2015). How much can spatial training improve STEM achievement? Educational Psychology Review, 27(4), 607-615.

Spatial Awareness Test

Try out our other tests

Related resources

Our assessments are designed by top scientists

Frequently asked questions

What is spatial awareness?

How to improve spatial reasoning

How to do spatial reasoning tests

Enhancing Cognitive Abilities: The Power of Spatial IQ Tests

Introduction

In today's fast-paced and competitive world, cognitive abilities play a crucial role in determining an individual's success. Intelligence Quotient (IQ) tests have long been used as a standard measure of general intelligence. However, there are specific domains of intelligence that go beyond traditional IQ tests. One such domain is spatial intelligence, which refers to the ability to visualize and manipulate objects in three-dimensional space. In this article, we explore the concept of spatial IQ tests, their importance, and how they can enhance cognitive abilities.

Understanding Spatial Intelligence

What is Spatial Intelligence?

Spatial intelligence, often referred to as visual-spatial intelligence, is the capacity to think and reason about objects in three dimensions. It involves mental visualization, spatial reasoning, and the ability to mentally manipulate shapes and forms. People with high spatial intelligence excel in fields such as architecture, engineering, mathematics, and design.

Why is Spatial Intelligence Important?

Spatial intelligence plays a vital role in various real-life scenarios. It helps individuals to navigate their surroundings effectively, solve complex problems, and visualize objects from different perspectives. Furthermore, spatial intelligence is closely linked to other cognitive abilities, including memory, attention, and problem-solving skills.

Spatial IQ Tests: Unveiling Your Hidden Potential

What are Spatial IQ Tests?

Spatial IQ tests are specialized assessments designed to measure an individual's spatial intelligence. These tests typically involve tasks that require mental rotation, pattern recognition, visualization, and spatial reasoning abilities. By solving these tasks, individuals can gain insight into their spatial cognitive strengths and weaknesses.

Benefits of Taking a Spatial IQ Test

  • Self-Discovery: Spatial IQ tests provide individuals with valuable insights into their spatial cognitive abilities. They can identify their strengths and weaknesses in visualizing and manipulating objects in space.
  • Personal Growth: By understanding one's spatial intelligence, individuals can focus on developing their weaker areas and enhancing their overall cognitive abilities. This self-improvement can positively impact various aspects of life, including academic performance and professional growth.
  • Career Guidance: Spatial IQ tests can guide individuals towards careers that align with their spatial intelligence. Discovering a strong aptitude for spatial reasoning can lead to pursuing professions in fields like architecture, engineering, interior design, or computer graphics.
  • Problem-Solving Skills: Spatial IQ tests challenge individuals to think critically, solve puzzles, and manipulate objects mentally. By regularly engaging in spatial problem-solving tasks, individuals can sharpen their problem-solving skills, which are valuable in various domains.

Improving Spatial Intelligence

  • Practice Visualization: Engage in activities that require visualizing objects or scenarios, such as solving jigsaw puzzles, playing chess, or participating in virtual reality games. Regular practice enhances your ability to visualize and manipulate objects mentally.
  • Play Strategy Games: Strategy games like chess or video games that involve spatial navigation and problem-solving can improve your spatial intelligence. These games require you to think strategically, plan your moves, and understand the spatial relationships between different game elements.
  • Explore Spatial Concepts: Learn about geometry, architecture, and design principles. Understanding spatial concepts and exploring their applications can broaden your spatial thinking and reasoning abilities.
  • Seek Spatial Challenges: Engage in activities that present spatial challenges, such as assembling furniture, solving spatial puzzles, or learning to read maps. These activities force you to think spatially and strengthen your mental rotation skills.

Spatial IQ tests offer a unique opportunity to understand and enhance your spatial intelligence, a vital aspect of cognitive abilities. By uncovering your spatial cognitive strengths and weaknesses, you can focus on developing and refining your skills in visualization, mental rotation, and spatial reasoning. Whether you aim to excel in a specific career or simply want to enhance your problem-solving skills, spatial IQ tests provide valuable insights and guidance. Embrace the power of spatial intelligence and unlock your hidden potential.

DATA ANALYSIS NOTES

Sample: n=220

84 results from “general population” (IQ = 90-100)

136 results from niche segment of population (IQ = 110-120)

Figure Slicing (FS) data is compromised due to a change in question order and question point values in the middle of the data collection period.

When benchmarked against the other subtests, the difference between the older and newer data is not significant. The two layers of data are therefore used alongside each other, and pooling them appears to have little effect on the overall results.
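As a sanity check on pooling the two layers, a simple two-sample comparison can be run. The sketch below uses Welch's t statistic (which does not assume equal variances) on made-up before/after Figure Slicing scores; the data here are illustrative, not the collected results.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic: mean difference over the unpooled standard error."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Illustrative Figure Slicing scores before and after the question-order change
old_layer = [12, 15, 14, 13, 16, 15, 14, 13]
new_layer = [13, 14, 15, 14, 16, 13, 15, 14]

t = welch_t(old_layer, new_layer)  # a small |t| is consistent with pooling the layers
```
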

Subtest means are almost equal. Slight differences can be corrected with an additive calibration, IQ = score + X, where X is a per-subtest constant chosen to align the means.
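That additive calibration can be sketched in a few lines; the raw scores below are hypothetical, and X is simply chosen so the sample mean lands on 100:

```python
from statistics import mean

def calibrate_offset(raw_scores, target_mean=100):
    """Pick X in IQ = score + X so the sample mean equals target_mean."""
    return target_mean - mean(raw_scores)

# Hypothetical raw scores for one subtest (sample mean 96)
raw = [88, 93, 97, 102, 100]
x = calibrate_offset(raw)      # 4
iq = [s + x for s in raw]      # mean is now exactly 100
```
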

Below is the histogram of the overall spatial IQ (SIQ), with the range containing the mean highlighted. The distribution is close to normal; formal statistical tests of normality were not applied.

Standard deviation (SD) is too high at 18. The test range is roughly 78–155 IQ. We are aiming for a standard deviation of around 14, rather than the conventional 15, due to the high floor.
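Once a target SD is chosen, the scores can be linearly rescaled toward it. A minimal sketch, using hypothetical scores: z-score each value, then restretch to the target mean and SD. This preserves rank order and relative spacing while fixing the first two moments.

```python
from statistics import mean, pstdev

def rescale(scores, target_mean=100, target_sd=14):
    """Z-score each value, then stretch to the target mean and SD."""
    m, sd = mean(scores), pstdev(scores)
    return [target_mean + (s - m) / sd * target_sd for s in scores]

# Hypothetical SIQ scores whose spread is too wide
adjusted = rescale([82, 90, 100, 110, 118])
```
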

A representative sample was constructed from the data: the 84 “general pop” results stood in for the 68% of a normal distribution falling between 85 and 115 IQ, while 16 results extracted from the “niche segment of pop” stood in for the 15% falling above 115. The standard deviation of this representative sample was lower, at 16.3. Still too high.

Since the SIQ takes the average of 3 subtest scores (the best 3), the SIQ SD will always be slightly lower than any of the subtest SDs. The SDs for Figure Slicing and Orientation are fine. Viewpoints and Rapid Rotation need to be fixed.
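The best-3 averaging rule is straightforward to express; the subtest scores here are invented for illustration:

```python
def siq(subtest_scores):
    """SIQ = mean of the best 3 of the subtest scores."""
    best3 = sorted(subtest_scores, reverse=True)[:3]
    return sum(best3) / 3

# Hypothetical subtest results for one participant
subtests = {"Figure Slicing": 104, "Orientation": 96,
            "Viewpoints": 118, "Rapid Rotation": 110}
score = siq(subtests.values())   # drops the weakest subtest (Orientation)
```
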

The test is reliable: Cronbach’s alpha is 0.78, and this should improve with refinement.
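For reference, Cronbach's alpha is the item count (scaled by k/(k−1)) times one minus the ratio of summed per-item variances to the variance of total scores. A sketch with made-up item-level data (one list per question, aligned by participant):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one score list per question, each aligned by participant."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]     # per-participant totals
    item_var_sum = sum(pvariance(it) for it in items)
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Made-up scores: 3 items answered by 4 participants
alpha = cronbach_alpha([[1, 2, 3, 4],
                        [1, 2, 3, 3],
                        [2, 2, 3, 4]])
```
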

Cronbach’s alpha cannot be computed per subtest due to the cutoff feature, which also distorts item discrimination, item difficulty, and related statistics.

Here is the intercorrelation matrix for the subtests and the SIQ:

Viewpoints has the highest correlation with SIQ. It is also the most complex spatial task, and has the largest sex difference. Viewpoints is likely the strongest subtest in the CASA. These intercorrelations of around 0.50 suggest that each subtest measures spatial ability to some degree, but each offers something unique; they do not all measure the same thing.
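The intercorrelations themselves are plain Pearson correlations between aligned subtest score columns; a self-contained sketch with invented scores:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two aligned score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented per-participant scores
viewpoints = [95, 100, 110, 120, 125]
siq        = [97, 101, 108, 118, 126]
r = pearson(viewpoints, siq)   # high, as Viewpoints tracks SIQ closely here
```
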

I conducted several studies in order to record “normal” data from the general population. The website Prolific was used. Participants were paid a small sum to complete the test, and were incentivized to make serious attempts by receiving bonus payments for higher scores. This allowed me to collect female results, which are difficult to obtain from internet forums, where participation skews heavily male. There was a consistent male advantage across all subtests, as expected. This supports the test’s validity, as measures of spatial ability reliably show sex differences; spatial ability is among the largest and best-documented cognitive differences between the sexes. The male advantage grows with task complexity: Rapid Rotation is the simplest subtest and has the smallest sex difference.

Orientation:

Normal distribution curve.

Orientation questions need to drop in value as the test goes on, since participants improve at answering them during the test. Most have never come across this style of question, so by question 10 they have had some practice and generally score much better. The questions are quite similar in nature, and the time limit stays the same throughout the subtest, although the difficulty increases slightly per question, with greater rotation required before calculating the angle.

Since this analysis, the test has been changed:

• Time limit reduced from 30 seconds to 25 seconds for each question

• Question order slightly changed according to item difficulty

• One question (#15) was replaced by a new question

The questions cannot be ordered completely by difficulty, as this makes the test far too monotonous; each question would differ only slightly in angle and position from the previous one. Orientation has the lowest standard deviation of all 4 subtests, likely due to the layered scoring system and the higher number (7) of distractor answers.

Rapid Rotation:

The standard deviation is 22, with too many people at the extremes: too many were able to reach the end or come close, and too many failed in the 1st and 2nd rounds. However, an SD of 22 is reasonable considering this subtest has by far the lowest cutoff threshold, which is necessary due to the low number (1) of distractor answers per question.

• Round 2 time limit extended from 10 seconds to 12 seconds

• Round 3 time limit reduced from 5 seconds to 4 seconds

• 3 questions with low difficulty were replaced

• Round 3 structure altered from 15 questions to 17 questions

I briefly looked at the table of questions and could see that #6, #7 and #17 were a bit too easy. These questions also had significantly lower item-total correlations than the others. They were replaced, and the question order was very slightly changed. The changes should add bulk to the centre of the histogram and lower the standard deviation.

Figure Slicing:

The Figure Slicing test differs from the other subtests in that its items vary considerably in difficulty and discriminating power. As such, it is crucial to get the order correct: the subtest must be ordered by difficulty due to the cutoff feature. If question #18 is as easy as #6, for example, it will slightly inflate the scores of those who reach question #18.

As mentioned, the question order was changed halfway through the data collection period. This would have affected results, but the effects are not easy to detect, nor are they significant enough to doubt any of the conclusions drawn from the data.

Item discrimination analysis was performed. Item discrimination is difficult to measure, again due to the cutoff feature: it tends to increase toward the end of the subtest, since the further in you go, the higher the proportion of remaining participants who have reached the threshold.

Therefore, to determine which items had poor discrimination ability, they were compared to the trend line of the subtest.

We can see that questions #5, #7, #13 and #18 are poor discriminators compared to the average, while questions #2, #3, #8 and #9 are good discriminators. Although there were 25 questions in the subtest, item discrimination is not reliable for the last 5 questions, since so few participants even reach them.
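The trend-line comparison can be sketched as follows: fit a least-squares line to the item-total correlations by item position, then flag items falling well below their fitted value. The correlations and the 0.15 flagging margin below are invented for illustration:

```python
from statistics import mean

def linear_trend(ys):
    """Least-squares fitted values over item positions 0..n-1."""
    xs = range(len(ys))
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return [my + slope * (x - mx) for x in xs]

# Invented item-total correlations in item order
disc = [0.30, 0.42, 0.45, 0.35, 0.12, 0.40, 0.18, 0.48, 0.50, 0.38]
fit = linear_trend(disc)

# Flag items sitting more than 0.15 below the trend line (1-indexed)
flagged = [i + 1 for i, (d, f) in enumerate(zip(disc, fit)) if d < f - 0.15]
```
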

• Question order changed (again)

• 4 questions were removed

The questions cannot be ordered completely by difficulty, since the value of some questions comes from their position in the test. The test now features 21 questions instead of 25. The four questions that were removed are shown below.

Viewpoints:

This subtest displays the highest initial SD at 26. Too many participants hit the floor and the ceiling. After observing multiple participants during testing, it seems some lack a conceptual understanding of the task; among those who do understand, too many reach the end. The histogram looks like this:

It is clear that the test has some issues, especially at the extremities. Since this analysis, the test has been changed:

• Round 1 time limit reduced from 30 seconds to 20 seconds

• Round 2 time limit reduced from 20 seconds to 12 seconds

• Round 3 time limit reduced from 10 seconds to 7 seconds

• Round 1 structure changed from 4 questions to 5 questions

• Round 2 structure changed from 12 questions to 10 questions

• Round 3 structure changed from 18 questions to 17 questions

• Cutoff threshold increased by 1

• Animated instructions added to increase understanding

• 2 questions with low difficulty were replaced

• Question order slightly changed

The subtest structure now mimics that of Rapid Rotation, with a different time limit in round 3 (7 seconds instead of 4). The only other difference is the cutoff threshold which is much higher in Viewpoints.

In the test instructions, there are two practice questions, each with a subsequent animation showing the perspective revolving around the cube. There should now be a much clearer understanding of what the question is asking, as well as the mental visualisations and movements necessary to answer the questions.

The changes made should all have the effect of bulking up the center of the histogram, although the left edge will still be bulky given the complexity of the task and the reasonably high floor. The SD is expected to drop from 26 to around 20.

Lifestylogy

24 Of The Smartest Animals On The Planet

Posted: May 6, 2024 | Last updated: May 6, 2024

The intelligence of animals is a testament to the complexity and diversity of life on Earth. Across various habitats, numerous species exhibit remarkable cognitive abilities that challenge our traditional views of intelligence. These animals navigate their environments, solve problems, and communicate in ways that demonstrate not just instinct, but understanding and adaptability. From the use of tools by primates to the sophisticated social structures of cetaceans, the examples of animal intelligence are both vast and varied. This exploration into the minds of non-human creatures highlights the intricate and often surprising capabilities that have evolved in the animal kingdom.

Chimpanzees

Chimpanzees are the closest living relatives to humans, capable of using tools, learning language symbols, and showing empathy. These primates demonstrate remarkable intellectual abilities, such as solving complex problems and using tools for food acquisition. They can learn sign language and other symbolic forms of communication, showcasing their capacity to understand abstract concepts. Furthermore, their social interactions exhibit signs of empathy, cooperation, and understanding, highlighting their emotional depth and complex societal structures.

Dolphins are renowned for their complex social behaviors, problem-solving skills, and the ability to recognize themselves in mirrors. Their intelligence is reflected in their sophisticated communication systems, intricate social networks, and collaborative hunting strategies. Dolphins have demonstrated self-awareness through mirror tests, a trait that is considered a hallmark of higher intelligence. They also engage in playful behavior and can form strong social bonds, both within and across species.

Elephants display problem-solving abilities, complex social structures, and have been observed using tools. These gentle giants exhibit a deep emotional intelligence, mourning their dead and showing empathy towards others. Their social life is intricate, with families led by matriarchs and complex communication systems involving both vocal and seismic signals. Elephants use branches to swat flies and leaves to cover water wells they have dug, showcasing their ability to use tools in their environment.

African Grey Parrots

African Grey Parrots are exceptional at mimicry, understanding concepts of shape, color, and number. Known for their impressive cognitive abilities, they can learn a vast vocabulary and use words in context, demonstrating an understanding of meaning beyond simple repetition. These birds are capable of solving puzzles and making tools, which points to a sophisticated level of intelligence. Their ability to comprehend and communicate complex concepts makes them one of the most intelligent bird species.

Octopuses are known for their problem-solving skills, ability to escape enclosures, and use tools. These cephalopods have a highly developed brain and exhibit behaviors such as opening jars and mimicking other species for survival. They can navigate complex mazes and change their skin color and texture for camouflage, showcasing their adaptability and intelligence. Octopuses’ use of tools, such as using coconut shells for shelter, highlights their cognitive sophistication.

Orangutans use tools in the wild, can learn sign language, and demonstrate foresight by planning. These intelligent primates are known for their ability to use leaves as gloves or umbrellas and sticks to extract termites from their mounds. Their capacity to learn sign language and communicate with humans underscores their cognitive abilities. Orangutans also plan their travel routes in advance, indicating a high level of intelligence and spatial awareness.

Ravens and Crows

Ravens and Crows exhibit problem-solving skills, can make tools, and understand causality. These birds are known for their ability to solve complex problems, such as pulling strings in a sequence to obtain a reward, and crafting tools from their environment to access food. Their understanding of cause and effect is demonstrated through their manipulation of objects and situations to their advantage. The intelligence of ravens and crows is further evidenced by their complex social structures and ability to remember human faces.

Pigs show advanced cognitive abilities, including problem-solving, emotional awareness, and social learning. They are capable of complex emotions and exhibit signs of empathy towards other pigs. Studies have shown pigs can learn to navigate mazes, understand symbolic languages, and even play video games with a joystick. Their social behaviors and ability to learn from each other indicate a high level of intelligence and cognitive complexity.

Rats are known for their ability to navigate mazes, demonstrate social learning, and have a high degree of empathy. These rodents have been shown to perform altruistic behaviors, such as freeing a trapped companion, indicating emotional depth. Rats can learn complex tasks, remember them for long periods, and teach these skills to other rats. Their adaptability and problem-solving abilities make them subjects of interest in scientific research on learning and memory.

Cats show problem-solving abilities, have complex social structures in the wild, and demonstrate a variety of learned behaviors. Independent and curious, cats often explore their environment, manipulate objects to achieve goals, and learn through observation. They establish territories and social hierarchies when living in groups, indicating a complex understanding of social organization. Cats’ ability to learn tricks, understand basic commands, and navigate their environment reflects their intelligence and adaptability.

Squirrels exhibit problem-solving skills, especially in food storage and retrieval strategies. They are adept at navigating complex environments, using memory and spatial awareness to relocate hidden caches of food. Squirrels also display deceptive behaviors, such as pretending to bury food to throw off potential thieves. Their ability to adapt to different ecosystems and innovate in food gathering and storage techniques underscores their cognitive abilities.

Horses have excellent memory, can solve problems, and understand human cues. Known for their sensitivity and perceptiveness, horses can recognize individuals, both human and equine, and remember complex pathways or courses. They respond to vocal commands and gestures, indicating an understanding of human communication. Horses also exhibit problem-solving abilities, such as opening gates or untangling themselves from ropes, demonstrating their ability to learn and adapt to their environment.

Sea Lions can follow complex commands, demonstrate logical reasoning, and have a sense of rhythm. They are capable of understanding syntax and sequences in commands, showing a remarkable ability to learn and remember tasks. Sea lions have been observed solving puzzles and using deductive reasoning to achieve goals. Their ability to keep a beat and move in rhythm with music highlights their cognitive flexibility and awareness.

Border Collies

Border Collies are known for their intelligence, ability to understand numerous words and commands, and problem-solving capabilities. Regarded as one of the most intelligent dog breeds, they excel in obedience and can learn complex commands quickly. Their ability to herd sheep and work in coordination with humans showcases their understanding of teamwork and strategy. Border Collies’ problem-solving skills are evident in their ability to navigate obstacle courses and adapt to new challenges.

Similar to chimpanzees with high emotional intelligence, use of tools, and complex social behaviors, Bonobos are known for their peaceful and cooperative societies. They engage in intricate social interactions, often using sex as a means of reducing conflict and strengthening social bonds. Their use of tools in the wild for food acquisition and other purposes highlights their intellectual capabilities. Bonobos also exhibit a variety of facial expressions and gestures to communicate, demonstrating their emotional depth and social complexity.

Bees show complex communication through the waggle dance, problem-solving abilities, and navigation skills. Their dance communicates the direction and distance of food sources to hive mates, demonstrating an intricate form of non-verbal communication. Bees are capable of learning and remembering landmarks to navigate between their hive and food sources. Their ability to solve problems, such as finding the most efficient routes to flowers, underscores their cognitive abilities.

Ants demonstrate complex social organization, farming abilities, and the construction of intricate colonies. They work in highly coordinated groups to gather food, defend the colony, and care for their young, showing an advanced level of collective intelligence. Ants farm aphids for honeydew and cultivate fungi, exhibiting an understanding of agriculture. Their ability to build complex structures and navigate long distances to find food highlights their problem-solving skills and spatial awareness.

Whales have complex social structures, communication methods, and display signs of culture. They are known for their song, which can travel vast distances underwater and is unique to each group, suggesting a form of cultural identity. Whales demonstrate empathy, collaborating in hunting and even helping other species in distress. Their large brains and complex behaviors indicate a high level of cognitive ability and social complexity.

Raccoons are known for their problem-solving abilities and dexterity with their paws. These nocturnal creatures can open locks, turn knobs, and untie knots, showcasing their intelligence and adaptability. Raccoons have a remarkable ability to remember solutions to tasks for up to three years, indicating a high level of memory retention. Their curious nature and ability to navigate complex environments make them skilled at finding food and avoiding predators.

Gorillas can learn sign language, show empathy, and use tools in the wild. They have been taught to communicate with humans through sign language, demonstrating an ability to understand complex concepts and emotions. Gorillas use sticks to measure the depth of water and gather food, evidencing their problem-solving skills and use of tools. Their gentle nature and strong family bonds show a deep emotional capacity and social intelligence.

Manta Rays show recognition abilities and complex social behaviors. They are known to engage in playful activities, such as somersaults and interactions with divers, indicating a level of curiosity and intelligence. Studies suggest that manta rays can recognize themselves in mirrors, a trait previously thought to be unique to a few mammalian species. Their social interactions and communication patterns reveal a complex social structure and cognitive capabilities.

Pigeons have been shown to recognize themselves in mirrors, learn alphabet sequences, and have excellent navigation skills. These birds are capable of complex learning tasks, such as differentiating between photographs and even paintings by different artists. Pigeons’ remarkable homing ability allows them to return to their nests over long distances, showcasing their navigation and memory skills. Their ability to learn sequences of actions and solve problems indicates a high level of cognitive function.

Cephalopods (Squids)

Some species of cephalopods, such as squids, show learning abilities, problem-solving, and use of camouflage in sophisticated ways. They can change color and texture to blend into their surroundings, a skill that requires acute environmental awareness and cognitive abilities. Squids communicate with each other using color patterns, which can indicate their mood or intentions. Their ability to learn through observation and solve problems, such as escaping from enclosures, highlights their intelligence and adaptability.

Wolves have complex social structures, problem-solving skills, and cooperative hunting strategies. They live in packs with a strict hierarchy and communicate through a variety of vocalizations, body language, and scent marking. Wolves’ ability to work together to trap and take down prey demonstrates their strategic thinking and teamwork. Their social learning capabilities allow them to pass on hunting techniques and territorial knowledge to younger members of the pack.

Our exploration of the animal kingdom’s cognitive and emotional realms reveals the profound intelligence and sensitivity that exist beyond humanity. These discoveries not only challenge our preconceptions about intelligence and consciousness but also call for a reevaluation of our ethical responsibilities towards other living beings. By recognizing the depth and complexity of animal minds, we can forge a more compassionate and sustainable relationship with the natural world. This journey underscores the urgent need for conservation and protection efforts, as we share this planet with beings of remarkable intellect and emotion. Ultimately, understanding and respecting the inner lives of animals enriches our own existence, reminding us of the interconnectedness of all life on Earth.



Development of spatial ability extra tasks (SAET): problem solving with spatial intelligence

Saeed Esmailnia Meidani

2022, Quality & Quantity

Spatial ability contributes to performance in science, technology, engineering, and mathematics (STEM). Spatial skills and creativity are required for engineering studies, and low spatial ability can lead students to drop out of their university studies. In this study, the Spatial Ability Extra Tasks (SAET) instrument was developed to evaluate engineering students' complex spatial abilities. A total of 93 first-year engineering students from the University of Debrecen Faculty of Engineering and the Sharif University of Technology in Tehran participated; their final mathematics exam results and gender were also recorded. SAET measures several components of spatial ability: mental cutting, mental rotation, and creativity. Analysis of the findings suggested that SAET is valid and reliable. The results of the separate tests were statistically evaluated, using Structural Equation Modeling analysis, and conclusions were formulated. SAET distinguishes two types of tasks: a Polyhedron part and a Curved Surface part. According to the data obtained, students of the University of Debrecen were more successful on Curved Surfaces, while students of Sharif University were more successful on Polyhedrons. In the Polyhedron tasks, the square cross section was found by most students in both countries. It is remarkable that the first-year engineering students in Tehran were more successful at solving Polyhedron tasks with pentagon, hexagon, and parallelogram cross sections, while students in Debrecen were more successful with the square and rectangle solutions. On Curved Surfaces, students in Debrecen were more successful at finding the circle solution for the cylinder, cone, and sphere, and students in Tehran were more successful at finding the parabola solution for the cone.
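The validity-and-reliability claim above rests on standard psychometric statistics. As a minimal sketch of one such statistic, the snippet below computes Cronbach's alpha for a toy score matrix; the scores are invented for illustration and are not the SAET data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: 6 students x 4 spatial tasks (1 = correct, 0 = wrong).
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
])
print(round(cronbach_alpha(scores), 3))
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency for a subtest.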

Related Papers

Periodica Polytechnica Social and Management Sciences

Saeed Esmailnia Meidani

The goal of this paper is to compare freshman engineering students' spatial abilities (spatial intelligence) at two universities, Sharif University in Tehran and the University of Debrecen in Hungary, focusing on both their final mathematical exam performance and their gender, so as to ascertain whether the students differ significantly in terms of their spatial abilities and/or their problem-solving methods. The test used to measure spatial intelligence performance and mental rotation was the Purdue Spatial Visualization Test (PSVT Branoff). The test results have been statistically evaluated and conclusions formulated. The results show that there was no significant difference between Iranian and Hungarian freshman engineering students in the performance of mental rotation tasks. However, a general gender difference in spatial ability performance was evident among the Hungarian students but not among the Iranians. The results also shed light on spatial rotation problem-solving methods ...
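Group comparisons of the kind described above are typically evaluated with a two-sample t-test. The sketch below uses invented PSVT-style scores, not the study's data; Welch's form is used because it does not assume equal variances in the two groups.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance of b
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical mental-rotation scores for two groups of students.
group_a = [21, 24, 19, 26, 23, 20, 25, 22]
group_b = [20, 22, 24, 21, 25, 23, 19, 24]
print(round(welch_t(group_a, group_b), 3))  # a small |t| points to no significant difference
```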

International Journal of Science and Applied Science: Conference Series

Sigit Rimbatmojo

Visual-spatial intelligence is one of the multiple intelligences that is important for solving mathematics problems, especially in geometry. The present research investigates the profile of students' visual-spatial intelligence. This research focuses on analysis and description of students' visual-spatial intelligence level in general and its aspects when solving geometric problems. The aspects of visual-spatial intelligence are imagination, pattern seeking, problem-solving, and conceptualization. Qualitative research with a case study strategy was used in this research. The subjects were 12 students of 11th grade chosen with purposive sampling. Data in this research were the students' visual-spatial intelligence test results and task-based interviews. Students were asked to complete the visual-spatial intelligence test before the interview. The data were analyzed based on the visual-spatial intelligence aspects of female and male students. The results of this research show that female...

Tehnicki vjesnik - Technical Gazette

james mburu

Procedia - Social and Behavioral Sciences

Nuran Güzel

IOP Conference Series: Earth and Environmental Science

yukma wijaya

Advances in Social Science, Education and Humanities Research

Miftakhul Rohmah

Facta Universitatis

Maja Ilic , Aleksandra M Djukic

Specialized spatial skills are necessary for success in various fields of STEM (science, technology, engineering, and mathematics) education. Technical disciplines are the academic field where the largest correlation with spatial skills has been noticed, and therefore spatial skills have been included in the entrance exams for the study of architecture at the University of Banja Luka. Given that the scientific community has not reached consensus on what spatial abilities are, there are various tests and tools used for their assessment, listed by the factors that they measure. The paper will present a typology of these factors and the variety of tests used for their assessment. This typology of tasks will be compared to the entrance exams held at the University of Banja Luka in the period 2005-2013. Also, the results of the entrance exams will be compared with the students' success in specific groups of subjects during the study period to see if there is any correlation among them. Results indicate the emergence of a new factor in assessing the ability of candidates to study architecture: the ability of divergent thinking. This correlation of divergent thinking and spatial ability has also been a topic of the latest research in cognitive psychology.

Malikussaleh Journal of Mathematics Learning (MJML)

Trisna Yuniarti

This study aims to determine the relationship between junior high school students' spatial and mathematical-logical intelligence based on their understanding of concepts. Spatial intelligence indicators in this study are reviewed from students' abilities in using images as a tool for solving problems, connecting data with concepts they already hold, and finding patterns in solving problems. On the other hand, the indicators of students' mathematical-logical intelligence are observed by looking at students' abilities to state and understand the information given in a problem, to draw up a plan of completion, and to do mathematical calculations correctly. The indicators of understanding concepts are restating a concept, providing examples and non-examples of concepts, and applying concepts or algorithms in problem solving. The research method in this research is the Ex Post Facto correlational type, using purposive sampling as the sampling technique. The results of the data ana...
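A correlational design like the one described above centers on Pearson's r between paired scores. The sketch below uses invented data for illustration, not the study's results.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores: spatial test vs. concept-understanding test.
spatial = [55, 62, 70, 48, 81, 66, 74, 59]
concept = [58, 60, 75, 50, 78, 70, 72, 61]
print(round(pearson_r(spatial, concept), 3))
```

An r near +1 would indicate that students who score high on the spatial test also tend to score high on concept understanding; it does not by itself establish causation.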

yopi gumilar


More From Forbes

Why AI Challenges Us To Become More Human

In an era where artificial intelligence is reshaping the boundaries of what machines can do, we find ourselves at a pivotal moment in history. AI isn’t just a technological upgrade; it's a mirror reflecting our potential to evolve as a species. As these intelligent systems take over routine and repetitive tasks, they challenge us to delve deeper into what makes us uniquely human: our creativity, empathy, and the ability to navigate complex social dynamics. Let’s explore why the rise of AI might actually be the best thing to push humanity towards realizing its full potential.

The Unfulfilled Potential Of Human Creativity

Every day, countless hours are spent on tasks that, frankly, do not require the distinct capabilities of the human brain. Data entry, managing bookings, and even diagnosing common medical conditions are just a few examples. These tasks, while important, are mechanical—often predictable and repetitive. It's in this mundane reality that AI steps in, not as a replacement for human effort but as a liberator of human potential.

Imagine a world where the bulk of such tasks is handled by AI. This isn't a distant future scenario; it's already happening. AI applications in business, healthcare, and even creative industries are taking over the drudgery, enabling us to focus on tasks that require a human touch—innovation, strategy, and personal interaction. This shift is monumental, akin to the Industrial Revolution, but instead of mechanical muscle, we're leveraging digital brains.

Creative Problem Solving With AI

The real magic happens when AI and human intelligence are combined to tackle complex problems. Consider the field of environmental science, where AI can analyze vast datasets of climate patterns far quicker than any human team. However, interpreting these patterns and strategizing impactful interventions require human ingenuity and ethical consideration—qualities that AI has yet to master.

Another compelling example is in artistic endeavors. AI can now compose music or generate graphic art, but it lacks the nuanced understanding of what captivates human emotions and cultural contexts. Artists collaborating with AI find that it can act as a powerful tool to extend their own creative capabilities, pushing the boundaries of traditional art forms into new and unexplored territories.

Human + AI Collaboration: A New Frontier

The synergy between human and machine opens up new frontiers for exploration and innovation. In healthcare, AI systems analyze medical data at superhuman speeds, but doctors provide the compassionate care and nuanced understanding that only a human can offer. Together, they achieve better outcomes, with AI handling data-driven tasks and humans focusing on patient care.

In business, AI tools predict consumer behavior through algorithms, but marketing professionals use these insights to craft creative and emotionally engaging campaigns that resonate on a human level. The technology identifies patterns, but the marketer tells the story.

The Future Is Human

As AI takes care of the ‘robotic’ aspects of work, humans are nudged towards roles that require creative problem-solving, emotional intelligence, moral judgment, and personal interaction. This isn’t just about job displacement; it’s about job transformation. It challenges us to redefine our roles in society and encourages the education system to focus more on critical thinking, creativity, emotional intelligence, and adaptability.

The question now is not whether AI will replace many of the tasks we currently do—it will—but what we do with the immense potential unleashed when this happens. As we delegate the routine to machines, we must cultivate our distinctly human abilities to engage, inspire, and innovate.

AI doesn't just challenge us to be more human; it demands it. By automating the mundane, AI not only frees our time but elevates our purpose. We are not moving towards an era where machines rule but one where they help us rediscover and reimagine what it means to be human. This is the paradox of our times: the more advanced our machines, the more we must tap into the depths of our human nature. In this new dawn, our most human traits are not our weaknesses but our greatest strengths.

Bernard Marr


AI Copilots Are Changing How Coding Is Taught

Professors are shifting away from syntax and emphasizing higher-level skills.

Generative AI is transforming the software development industry. AI-powered coding tools are assisting programmers in their workflows, while jobs in AI continue to increase. But the shift is also evident in academia—one of the major avenues through which the next generation of software engineers learn how to code.

Computer science students are embracing the technology, using generative AI to help them understand complex concepts, summarize complicated research papers, brainstorm ways to solve a problem, come up with new research directions, and, of course, learn how to code.

“Students are early adopters and have been actively testing these tools,” says Johnny Chang , a teaching assistant at Stanford University pursuing a master’s degree in computer science. He also founded the AI x Education conference in 2023, a virtual gathering of students and educators to discuss the impact of AI on education.

So as not to be left behind, educators are also experimenting with generative AI. But they’re grappling with techniques to adopt the technology while still ensuring students learn the foundations of computer science.

“It’s a difficult balancing act,” says Ooi Wei Tsang , an associate professor in the School of Computing at the National University of Singapore . “Given that large language models are evolving rapidly, we are still learning how to do this.”

Less Emphasis on Syntax, More on Problem Solving

The fundamentals and skills themselves are evolving. Most introductory computer science courses focus on code syntax and getting programs to run, and while knowing how to read and write code is still essential, testing and debugging—which aren’t commonly part of the syllabus—now need to be taught more explicitly.

“We’re seeing a little upping of that skill, where students are getting code snippets from generative AI that they need to test for correctness,” says Jeanna Matthews , a professor of computer science at Clarkson University in Potsdam, N.Y.
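Testing an AI-suggested snippet for correctness can start with a handful of assertions on edge cases before the code is accepted. The helper below is a made-up example of such a suggestion, not output from any particular assistant:

```python
# Suppose an assistant suggested this helper (hypothetical example):
def median(values):
    """Middle value of a sorted copy of `values`."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Before trusting the suggestion, exercise it on edge cases:
assert median([3, 1, 2]) == 2         # odd length
assert median([4, 1, 3, 2]) == 2.5    # even length: mean of the middle pair
assert median([7]) == 7               # single element
```

If any assertion failed, that would be the cue to debug or re-prompt rather than submit the code as-is.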

Another vital expertise is problem decomposition. “This is a skill to know early on because you need to break a large problem into smaller pieces that an LLM can solve,” says Leo Porter , an associate teaching professor of computer science at the University of California, San Diego . “It’s hard to find where in the curriculum that’s taught—maybe in an algorithms or software engineering class, but those are advanced classes. Now, it becomes a priority in introductory classes.”
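Problem decomposition in practice might look like splitting "report the most frequent words in a text" into three prompt-sized functions, each small enough for an LLM to generate and a student to verify independently (a made-up illustration):

```python
def tokenize(text):
    """Lowercase the text and split it into alphabetic words."""
    cleaned = ''.join(c.lower() if c.isalpha() else ' ' for c in text)
    return cleaned.split()

def count_words(words):
    """Count occurrences of each word."""
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

def top_n(counts, n):
    """Return the n most frequent (word, count) pairs, ties broken alphabetically."""
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))[:n]

print(top_n(count_words(tokenize("To be, or not to be")), 2))
```

Each piece has a clear contract, so a failure in the combined pipeline can be traced to one small unit.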

“Given that large language models are evolving rapidly, we are still learning how to do this.” —Ooi Wei Tsang, National University of Singapore

As a result, educators are modifying their teaching strategies. “I used to have this singular focus on students writing code that they submit, and then I run test cases on the code to determine what their grade is,” says Daniel Zingaro , an associate professor of computer science at the University of Toronto Mississauga . “This is such a narrow view of what it means to be a software engineer, and I just felt that with generative AI, I’ve managed to overcome that restrictive view.”

Zingaro, who coauthored a book on AI-assisted Python programming with Porter, now has his students work in groups and submit a video explaining how their code works. Through these walk-throughs, he gets a sense of how students use AI to generate code, what they struggle with, and how they approach design, testing, and teamwork.

“It’s an opportunity for me to assess their learning process of the whole software development [life cycle]—not just code,” Zingaro says. “And I feel like my courses have opened up more and they’re much broader than they used to be. I can make students work on larger and more advanced projects.”

Ooi echoes that sentiment, noting that generative AI tools “will free up time for us to teach higher-level thinking—for example, how to design software, what is the right problem to solve, and what are the solutions. Students can spend more time on optimization, ethical issues, and the user-friendliness of a system rather than focusing on the syntax of the code.”

Avoiding AI’s Coding Pitfalls

But educators are cautious given an LLM’s tendency to hallucinate . “We need to be teaching students to be skeptical of the results and take ownership of verifying and validating them,” says Matthews.

Matthews adds that generative AI “can short-circuit the learning process of students relying on it too much.” Chang agrees that this overreliance can be a pitfall and advises his fellow students to explore possible solutions to problems by themselves so they don’t lose out on that critical thinking or effective learning process. “We should be making AI a copilot—not the autopilot—for learning,” he says.

“We should be making AI a copilot—not the autopilot—for learning.” —Johnny Chang, Stanford University

Other drawbacks include copyright and bias. “I teach my students about the ethical constraints—that this is a model built off other people’s code and we’d recognize the ownership of that,” Porter says. “We also have to recognize that models are going to represent the bias that’s already in society.”

Adapting to the rise of generative AI involves students and educators working together and learning from each other. For her colleagues, Matthews’s advice is to “try to foster an environment where you encourage students to tell you when and how they’re using these tools. Ultimately, we are preparing our students for the real world, and the real world is shifting, so sticking with what you’ve always done may not be the recipe that best serves students in this transition.”

Porter is optimistic that the changes they’re applying now will serve students well in the future. “There’s this long history of a gap between what we teach in academia and what’s actually needed as skills when students arrive in the industry,” he says. “There’s hope on my part that we might help close the gap if we embrace LLMs.”

Rina Diane Caballar is a writer covering tech and its intersections with science, society, and the environment. An IEEE Spectrum Contributing Editor, she's a former software engineer based in Wellington, New Zealand.

Bruce Benson

Yes! Great summary of how things are evolving with AI. I’m a retired coder (BS comp sci) and understand the fundamentals of developing systems. Learning the latest systems is now the greatest challenge. I was intrigued by Ansible to help me manage my homelab cluster, but who wants to learn one more scripting language? Turns out ChatGPT-4 knows the syntax, semantics, and workflow of Ansible, and all I do is tell it to “install log2ram on all my proxmox servers” and I get a playbook that does just that. The same with Docker Compose scripts. Wow.
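For readers who haven't used Ansible, a playbook of the kind described might look roughly like the sketch below. The host group name and the assumption that a `log2ram` package is installable from a configured repository are illustrative, not a tested configuration:

```yaml
# Hypothetical sketch only; inventory group and package source are assumptions.
- name: Install log2ram on all Proxmox servers
  hosts: proxmox          # assumes an inventory group named "proxmox"
  become: true
  tasks:
    - name: Ensure the log2ram package is present
      ansible.builtin.apt:
        name: log2ram
        state: present
        update_cache: true
```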

Title: Instance-Conditioned Adaptation for Large-Scale Generalization of Neural Combinatorial Optimization

Abstract: The neural combinatorial optimization (NCO) approach has shown great potential for solving routing problems without the requirement of expert knowledge. However, existing constructive NCO methods cannot directly solve large-scale instances, which significantly limits their application prospects. To address these crucial shortcomings, this work proposes a novel Instance-Conditioned Adaptation Model (ICAM) for better large-scale generalization of neural combinatorial optimization. In particular, we design a powerful yet lightweight instance-conditioned adaptation module for the NCO model to generate better solutions for instances across different scales. In addition, we develop an efficient three-stage reinforcement learning-based training scheme that enables the model to learn cross-scale features without any labeled optimal solution. Experimental results show that our proposed method is capable of obtaining excellent results with a very fast inference time in solving Traveling Salesman Problems (TSPs) and Capacitated Vehicle Routing Problems (CVRPs) across different scales. To the best of our knowledge, our model achieves state-of-the-art performance among all RL-based constructive methods for TSP and CVRP with up to 1,000 nodes.
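For readers unfamiliar with constructive routing methods: the classical nearest-neighbor heuristic below builds a TSP tour one node at a time, the same step-by-step construction that constructive NCO models perform with a learned neural policy instead of a fixed greedy rule. This is a textbook baseline for illustration, not the paper's ICAM model.

```python
import math

def nearest_neighbor_tour(coords):
    """Construct a TSP tour greedily: always visit the closest unvisited node."""
    n = len(coords)
    unvisited = set(range(1, n))
    tour = [0]                      # start at node 0
    while unvisited:
        last = coords[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, coords[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(coords, tour):
    """Total length of the closed tour (returns to the start)."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# Four points on a unit square; the greedy tour here is also the optimum.
coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
tour = nearest_neighbor_tour(coords)
print(tour, round(tour_length(coords, tour), 2))
```

On large instances this greedy rule degrades, which is exactly the regime where learned, instance-conditioned construction policies aim to do better.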


2025 IEEE Symposium Series on Computational Intelligence (SSCI)

Mark Your Calendars!

CIS is excited to announce the launch of IEEE SSCI 2025, the newly restructured biennial Symposia Series featuring ten dedicated Applied Computational Intelligence Symposia. IEEE SSCI 2025 will be hosted in Trondheim, Norway on 17-20 March 2025.

IEEE SSCI is widely recognized for cultivating the interchange of state-of-the-art theories and sophisticated algorithms within the broad realm of Computational Intelligence applications. The Symposia provide for cross-pollination of research concepts, fostering an environment that facilitates future inter- and intra-disciplinary collaborations. The following is the list of the updated Symposia that will be part of SSCI 2025:

IEEE Symposia on Computational Intelligence for Energy, Transport, and Environmental Sustainability

IEEE Symposia on Computational Intelligence in Engineering and Cyber-Physical Systems

IEEE Symposia on Computational Intelligence in Image, Signal Processing, and Synthetic Media

IEEE Symposia on Computational Intelligence in Artificial Life and Cooperative Intelligent Systems

IEEE Symposia on Computational Intelligence in Security, Defense, and Biometrics

IEEE Symposia on Computational Intelligence in Health and Medicine

IEEE Symposia on Computational Intelligence for Financial Engineering and Economics

IEEE Symposia on Computational Intelligence in Natural Language Processing and Social Media

IEEE Symposia on Trustworthy, Explainable, and Responsible Computational Intelligence

IEEE Symposia on Multidisciplinary Computational Intelligence Incubators

IMPORTANT DATES

  • 10 June 2024 - Call for Competitions Deadline
  • 24 June 2024 - Notification of Acceptance (Competitions)
  • 17 September 2024 - Full and Short Paper Submission (Full/Short) Deadline
  • 12 November 2024 - Author Notification (Full/Short)
  • 10 December 2024 - Late Breaking Papers Submission (LBP), Poster-only Submissions (Abstracts), Journal Paper Presentations (JPP) Deadline
  • 18 December 2024 - Camera-Ready Submissions (Full/Short)
  • 20 December 2024 - Early Registration Deadline
  • 10 January 2025 - Author notification (LBP, Abstracts, JPP)
  • 20 January 2025 - Camera-Ready Submissions (LBP, Abstracts)

For more information, visit ieee-ssci.org or see the Call for Papers.

COMMENTS

  1. Understanding & Developing Visual-Spatial Intelligence

    Skills that require using your visual-spatial intelligence include: Solving a Rubik's Cube. Completing mazes. Putting puzzles together. Reading maps. These activities can both demonstrate your visual-spatial intelligence and allow you to flex your visual-spatial muscles and strengthen your skills in this area. These kinds of brain exercises ...

  2. Spatial Intelligence

    9. Play spatial reasoning games such as Tetris. Playing spatial reasoning video games such as Marble Madness or Tetris has been shown to benefit children's spatial intelligence. The improvement is more pronounced in low-ability kids [32]. 10. Help your child explore photography.

  3. Spatial intelligence in the classroom: what is it and how to develop it

    Improves problem-solving skills: working on spatial intelligence helps visualise and manipulate objects in space, leading to a better understanding of spatial problems and fostering creativity in innovative problem-solving approaches. Enhances Math, Science and Design Skills: The development of spatial intelligence greatly aids in grasping mathematical and scientific concepts.

  4. Spatial intelligence (psychology)

    Spatial intelligence is an area in the theory of multiple intelligences that deals with spatial judgment and the ability to visualize with the mind's eye. It is defined by Howard Gardner as a human computational capacity that provides the ability or mental skill to solve spatial problems of navigation, visualization of objects from different angles and space, faces or scenes recognition, or to ...

  5. Spatial Intelligence: The Unseen Dimension of IQ Tests

    Spatial Intelligence in Education: ... By recognizing and nurturing spatial reasoning abilities, we can unlock a dimension of intelligence that enhances problem-solving, creativity, and innovation

  6. Spatial thinking as the missing piece in mathematics curricula

    Spatial visualization is a valuable tool for mathematical problem solving, as it can be strategically used as a "mental blackboard" to model, simulate and manipulate mathematical problems and ...

  7. Foundations of human spatial problem solving

    Neuroscience continues to reveal aspects of how the brain might learn to solve problems. Studies of cognitive control highlight how the brain, especially the prefrontal cortex, can apply and ...

  8. Why spatial is special in education, learning, and everyday activities

    Spatial thinking is a broader topic than spatial ability, however (Hegarty 2010).We use symbolic spatial tools, such as graphs, maps, and diagrams, in both educational and everyday contexts. These tools significantly enhance human reasoning, for example, graphs are a powerful tool to show the relationship among a set of variables in two (or higher) dimensions.

  9. Development of spatial ability extra tasks (SAET): problem solving with

    Spatial ability contributes to performance in science, technology, engineering and mathematics (STEM). Spatial skills and creativity are required for engineering studies. Low spatial abilities can lead to the dropout of students' university studies. In this study the Spatial Ability Extra Tasks (SAET) was developed to evaluate engineering students' complex spatial abilities. A total of 93 ...

  10. Spatial Problem Solving in Spatial Structures

    Spatial problem solving has been a fundamental research topic in AI from the very beginning. Initially, spatial relations were treated like other features: task-relevant aspects of the domain were formalized and represented in some kind of data structure; general computation and reasoning methods were applied; and the result of the computation was interpreted in terms of the target domain.

  11. Hass's Theory: How Is the Students' Spatial Intelligence in Solving

    spatial intelligence in solving the problems given. The last stage is drawing conclusions based on data and information obtained by researchers. The conclusion is a procedure at the end of the ...

  12. Working memory, visual-spatial-intelligence and their relationship to

    Regarding content facets, we focus on "spatial-figural material", due to the spatial-figural aspects relevant for the problem-solving (and intelligence) measures used in the present studies. Summing up, the aim of the present study was to explore the relations between working memory, intelligence (Gv) and problem-solving. Based on results ...

  13. Components of Spatial Intelligence

    High-spatial visualizers tend to abstract only the information necessary to solve a spatial problem and are successful in solving spatial ability test items, but are less successful on problems that depend on vivid detailed mental images. ... This type of spatial intelligence comes into play when I am analyzing some new data and use a graphing ...

  14. The Connection Between Spatial and Mathematical Ability Across

    Factor analysis. Both spatial and mathematical ability have been investigated since the early days of psychological science using factor analytical methods that sought to map the "structure of the intellect" (Spearman, 1927; Thurstone, 1938). This research showed a connection between spatial and mathematical domains, yet the mechanisms by which training spatial thinking can promote ...

  15. Spatial Intelligence: Examples & Activities

    Discover our blog to learn everything you need to know about spatial intelligence and how you can improve your kids' visual spatial intelligence skills. ... creating and problem-solving. Thus, career possibilities for kids with high spatial intelligence include: Architecture, Visual arts, Graphic design

  16. PDF Development of spatial ability extra tasks (SAET): problem solving with

    Spatial skills and creativity are required for engineering studies. Low spatial abilities can lead to the dropout of students' university studies. In this study the Spatial Ability Extra Tasks (SAET) was developed to evaluate engineering students' complex spatial abilities. A total of 93 first-year engineering students from University of ...

  17. Working memory, visual-spatial-intelligence and their relationship to

    We used measures of visuo-spatial intelligence (Gv) and working memory to predict performance in the simulation-based problem-solving test MultiFlux in a sample of N = 144 undergraduate students. SEM analyses showed that while there was no unique contribution of Gv, working memory was a significant predictor of MultiFlux rule knowledge and ...

  18. Development of spatial ability extra tasks (SAET): problem solving with

    The result showed that subjects with high visual-spatial intelligence levels met all indicators of creativity. In solving problems that meet the aspects of fluency, flexibility and originality ...

  19. Accelerated Data Generation & Spatial Problem-Solving

    Geospatial artificial intelligence (GeoAI) is the application of artificial intelligence fused with geospatial data, science, and technology to accelerate real-world understanding. ... Discover how organizations are building a more resilient future with accelerated spatial problem-solving.

  20. Spatial Reasoning Test

    Spatial reasoning is also linked to problem-solving abilities, as it allows individuals to analyze complex spatial information and find solutions. Overall, individuals with strong spatial reasoning skills have a heightened ability to understand and manipulate spatial information, making them well-suited for tasks that require visualizing and ...

  21. Enhancing Cognitive Abilities: The Power of Spatial IQ Tests

    By regularly engaging in spatial problem-solving tasks, individuals can sharpen their problem-solving skills, which are valuable in various domains. Improving Spatial Intelligence Practice Visualization: Engage in activities that require visualizing objects or scenarios, such as solving jigsaw puzzles, playing chess, or participating in virtual ...

  22. (PDF) Spatial Problem Solving in Spatial Structures

    solving approaches, we can include the spatial problem domain as part of the system and (1) maintain some of the spatial relations in their original form; (2) simulate spatial relations and ...

  23. 24 Of The Smartest Animals On The Planet

    Squirrels exhibit problem-solving skills, especially in food storage and retrieval strategies. They are adept at navigating complex environments, using memory and spatial awareness to relocate ...

  24. Development of spatial ability extra tasks (SAET): problem solving with

    This research focuses on the analysis and description of students' visual-spatial intelligence levels in general, and its aspects when solving geometric problems. The aspects of visual-spatial intelligence are imagination, pattern seeking, problem-solving, and conceptualization. Qualitative research with a case study strategy was used in this research.

  25. GOLD: Geometry Problem Solver with Natural Language Description

    Addressing the challenge of automated geometry math problem-solving in artificial intelligence (AI) involves understanding multi-modal information and mathematics. Current methods struggle with accurately interpreting geometry diagrams, which hinders effective problem-solving. To tackle this issue, we present the Geometry problem sOlver with natural Language Description (GOLD) model. GOLD ...

  26. Why AI Challenges Us To Become More Human

    Creative Problem Solving With AI. The real magic happens when AI and human intelligence are combined to tackle complex problems. Consider the field of environmental science, where AI can analyze ...

  27. AI Copilots Are Changing How Coding Is Taught

    Professors are shifting away from syntax and emphasizing higher-level skills. Generative AI is transforming the software development industry. AI-powered coding tools are assisting programmers in ...

  28. [2405.01906] Instance-Conditioned Adaptation for Large-scale

    The neural combinatorial optimization (NCO) approach has shown great potential for solving routing problems without the requirement of expert knowledge. However, existing constructive NCO methods cannot directly solve large-scale instances, which significantly limits their application prospects. To address these crucial shortcomings, this work proposes a novel Instance-Conditioned Adaptation ...

  29. 2025 IEEE Symposium Series on Computational Intelligence (SSCI)

    From its institution as the Neural Networks Council in the early 1990s, the IEEE Computational Intelligence Society has rapidly grown into a robust community with a vision for addressing real-world issues with biologically-motivated computational paradigms. The Society offers leading research in nature-inspired problem solving, including neural networks, evolutionary algorithms, fuzzy systems ...