ACT-R Model Builder

Purpose

This skill encodes expert knowledge for constructing computational cognitive models within the ACT-R (Adaptive Control of Thought -- Rational) architecture. It provides guidance on chunk type definition, production rule authoring, subsymbolic parameter selection with empirically validated defaults, model fitting workflows, and validation procedures. A general-purpose programmer would not know the architecture constraints, parameter defaults, or model validation standards without specialized cognitive modeling training.

When to Use This Skill

  • Designing a new ACT-R model for a cognitive task (memory retrieval, decision-making, skill acquisition)
  • Setting subsymbolic parameters and understanding their theoretical justification
  • Structuring chunk types and production rules for a specific experimental paradigm
  • Fitting an ACT-R model to behavioral data (RT, accuracy)
  • Validating a model via parameter recovery, cross-validation, or qualitative predictions
  • Choosing between ACT-R 7.x (Lisp) and pyactr (Python) for implementation

Research Planning Protocol

Before executing the domain-specific steps below, you MUST:

  1. State the research question -- What specific question is this analysis/paradigm addressing?
  2. Justify the method choice -- Why is this approach appropriate? What alternatives were considered?
  3. Declare expected outcomes -- What results would support vs. refute the hypothesis?
  4. Note assumptions and limitations -- What does this method assume? Where could it mislead?
  5. Present the plan to the user and WAIT for confirmation before proceeding.

For detailed methodology guidance, see the research-literacy skill.

⚠️ Verification Notice

This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.

ACT-R Architecture Overview

ACT-R is a hybrid cognitive architecture with symbolic and subsymbolic components (Anderson, 2007; Anderson & Lebiere, 1998).

Core Modules and Buffers

| Module | Buffer | Function | Source |
| --- | --- | --- | --- |
| Declarative memory | retrieval | Stores and retrieves chunks (facts) | Anderson, 2007, Ch. 2 |
| Procedural memory | (none; fires productions) | Stores production rules (skills) | Anderson, 2007, Ch. 3 |
| Goal | goal | Tracks current task state | Anderson, 2007, Ch. 4 |
| Imaginal | imaginal | Holds intermediate problem representations | Anderson, 2007, Ch. 4 |
| Visual | visual, visual-location | Attends to and encodes visual objects | Anderson, 2007, Ch. 6 |
| Motor | manual | Executes motor responses (keypresses) | Anderson, 2007, Ch. 6 |
| Temporal | temporal | Tracks time intervals | Taatgen et al., 2007 |

Processing Cycle

  1. Buffers hold one chunk each (the "bottleneck" assumption; Anderson, 2007, Ch. 1)
  2. Productions match against buffer contents (pattern matching)
  3. Conflict resolution selects one production per cycle (~50 ms per production firing; Anderson, 2007)
  4. Selected production modifies buffers or makes requests to modules
  5. Modules process requests asynchronously

Building the Symbolic Model

Step 1: Define Chunk Types

Chunk types define the structure of declarative knowledge:

```lisp
;; ACT-R 7.x Lisp syntax
(chunk-type addition-problem arg1 arg2 answer)
(chunk-type counting-fact number next)
```

Decision rules for chunk type design:

  1. Each chunk type represents one category of knowledge (Anderson, 2007, Ch. 2)
  2. Slots should correspond to meaningful features of the domain
  3. Use inheritance when chunk types share structure (e.g., a "problem" parent type)
  4. Keep chunks small -- typically 3-6 slots per chunk type (Anderson & Lebiere, 1998)

Step 2: Write Production Rules

Productions follow an IF-THEN structure:

```lisp
(p retrieve-answer
   =goal>
      isa    addition-problem
      arg1   =num1
      arg2   =num2
      answer nil
   ?retrieval>
      state  free
==>
   +retrieval>
      isa      addition-fact
      addend1  =num1
      addend2  =num2
   =goal>
)
```

Production rule guidelines:

| Guideline | Rationale | Source |
| --- | --- | --- |
| One request per production | Module bottleneck constraint | Anderson, 2007, Ch. 3 |
| Test buffer state before requesting | Prevents jamming the module | Bothell, 2023, ACT-R reference manual |
| Use =goal> to maintain the goal buffer | Prevents goal harvesting | Bothell, 2023 |
| Minimize productions per task step | Simpler models are preferred (parsimony) | Anderson, 2007, Ch. 1 |

Step 3: Structure the Goal Stack

Is the task sequential with clear phases?
 |
 +-- YES --> Use a single goal chunk with a "step" slot
 |           that tracks the current phase
 |
 +-- NO --> Does the task require subgoaling?
             |
             +-- YES --> Use goal push/pop (stack)
             |
             +-- NO --> Use the imaginal buffer for
                        intermediate representations

Subsymbolic Parameters

These parameters govern memory activation, retrieval, and production selection. See references/parameter-table.yaml for the complete table.

Core Declarative Memory Parameters

| Parameter | Symbol | Default | Typical Range | Source |
| --- | --- | --- | --- | --- |
| Base-level learning decay | d | 0.5 | 0.1 -- 1.0 | Anderson & Schooler, 1991; Anderson, 2007 |
| Activation noise | s | 0.4 | 0.1 -- 0.8 | Anderson, 2007 |
| Latency factor | F | 1.0 | 0.2 -- 5.0 | Anderson, 2007 |
| Latency exponent | f | 1.0 | Fixed in most models | Anderson, 2007 |
| Retrieval threshold | tau | -infinity (default) | Set empirically; often 0.0 to -2.0 | Anderson, 2007 |
| Maximum associative strength | S (mas) | context-dependent | 1.0 -- 5.0 | Anderson & Reder, 1999 |
| Mismatch penalty | P | application-dependent | 0.5 -- 2.0 | Anderson, 2007 |
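The base-level learning mechanism governed by the decay parameter d can be sketched in plain Python (a minimal illustration of the equation, not an ACT-R implementation):

```python
import math

def base_level_activation(ages, d=0.5):
    """Base-level learning: B_i = ln(sum_k t_k^(-d)), where t_k is
    the time in seconds since the k-th presentation of the chunk
    (Anderson & Schooler, 1991). d = 0.5 is the ACT-R default."""
    return math.log(sum(t ** (-d) for t in ages))

# A chunk presented 10 s and 100 s ago: recent presentations
# dominate because of the power-law decay.
b = base_level_activation([10.0, 100.0])
```

More presentations raise B_i; longer delays lower it, which is what produces forgetting curves and practice effects.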

Production Utility Parameters

| Parameter | Symbol | Default | Typical Range | Source |
| --- | --- | --- | --- | --- |
| Utility noise | sigma | 0.0 (deterministic) | 0.1 -- 2.0 when enabled | Anderson, 2007 |
| Utility learning rate | alpha | 0.2 | 0.01 -- 1.0 | Anderson, 2007 |
| Initial utility | U0 | 0.0 | Set per production | Anderson, 2007 |
| Production compilation | enabled/disabled | Disabled by default | -- | Taatgen & Anderson, 2002 |
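Utility learning follows a simple delta rule, U_n = U_{n-1} + alpha * (R_n - U_{n-1}); a minimal sketch of how utilities converge toward received rewards:

```python
def update_utility(u, reward, alpha=0.2):
    """Delta-rule utility learning (Anderson, 2007):
    U_n = U_{n-1} + alpha * (R_n - U_{n-1}).
    alpha = 0.2 is the default learning rate."""
    return u + alpha * (reward - u)

# A production that earns a reward of 10 on every firing,
# starting from the default initial utility U0 = 0:
u = 0.0
for _ in range(20):
    u = update_utility(u, reward=10.0)
# u approaches (but never quite reaches) the reward value of 10
```

With utility noise enabled (sigma > 0), production selection becomes probabilistic over these learned utilities rather than strictly greedy.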

Timing Parameters

| Parameter | Value | Source |
| --- | --- | --- |
| Production cycle time | 50 ms | Anderson, 2007 |
| Visual encoding time | 85 ms | Anderson, 2007, Ch. 6 |
| Motor initiation time | 50 ms | Anderson, 2007, Ch. 6 |
| Motor execution time | 100 ms (Fitts' law applies) | Anderson, 2007, Ch. 6 |
| Imaginal delay | 200 ms | Anderson, 2007, Ch. 4 |
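A model's predicted response time is the sum of these component times along the critical path. A sketch for a hypothetical serial trial (encode the stimulus, fire a few productions, wait for a retrieval, then respond); the trial structure here is illustrative, not a fixed ACT-R recipe:

```python
def predicted_rt(n_productions, retrieval_time,
                 visual_encode=0.085, cycle=0.050,
                 motor_init=0.050, motor_exec=0.100):
    """Sum the timing components from the table above (in seconds)
    for a serial trial: visual encoding, n production firings,
    one declarative retrieval, then motor initiation + execution."""
    return (visual_encode + n_productions * cycle
            + retrieval_time + motor_init + motor_exec)

# Two productions and a 300 ms retrieval:
rt = predicted_rt(n_productions=2, retrieval_time=0.3)  # 0.635 s
```

This decomposition is also why behavioral RTs must be compared against the full predicted time, not the retrieval latency alone (see the pitfalls section).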

Activation Equation

Total activation of chunk i:

A_i = B_i + sum_j(W_j * S_ji) + PM_i + noise

Where:

  • B_i = base-level activation (log of weighted recency; decay d; Anderson & Schooler, 1991)
  • W_j * S_ji = spreading activation from source j (Anderson, 2007, Ch. 5)
  • PM_i = partial matching component (Anderson, 2007)
  • noise = logistic noise with scale s (Anderson, 2007)

Retrieval time: RT = F * e^(-f * A_i) (Anderson, 2007)
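The activation and latency equations above can be combined in a few lines (noise omitted for clarity; the specific numbers are illustrative):

```python
import math

def activation(base, sources, pm=0.0):
    """A_i = B_i + sum_j W_j * S_ji + PM_i (noise term omitted).
    `sources` is a list of (W_j, S_ji) pairs."""
    return base + sum(w * s for w, s in sources) + pm

def retrieval_time(a, F=1.0, f=1.0):
    """RT = F * exp(-f * A_i): higher activation means faster retrieval."""
    return F * math.exp(-f * a)

# One spreading-activation source with W = 1.0, S = 0.8:
a = activation(base=0.5, sources=[(1.0, 0.8)])  # A_i = 1.3
rt = retrieval_time(a)
```

Because RT is exponential in -A_i, small activation differences near the retrieval threshold produce large latency differences, which is the core mechanism behind ACT-R's fan-effect and practice-curve predictions.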

Model Fitting Workflow

Step 1: Identify Free Parameters

How many free parameters?
 |
 +-- <= 3 --> Standard practice; proceed
 |
 +-- 4-6 --> Acceptable if justified by model complexity
 |
 +-- > 6 --> Warning: overfitting risk. Consider fixing some
             to default values (Anderson, 2007)

Rule of thumb: The number of free parameters should be substantially less than the number of independent data points being fit (Roberts & Pashler, 2000).

Step 2: Choose Fitting Method

| Method | When to Use | Source |
| --- | --- | --- |
| Grid search | Few parameters (1-3), bounded space | Standard practice |
| Simplex (Nelder-Mead) | Moderate parameters, smooth landscape | Anderson, 2007 |
| Differential evolution | Many parameters, multimodal landscape | Storn & Price, 1997 |
| Bayesian optimization | Expensive evaluations, informed priors | Palestro et al., 2018 |
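For the common 1-3 parameter case, grid search is easy to implement and audit. A sketch that fits the latency parameters F and f to observed retrieval times (the synthetic data and grids are illustrative):

```python
import math

def grid_search_fit(observed_rts, activations, F_grid, f_grid):
    """Exhaustive grid search over latency factor F and latency
    exponent f, minimizing summed squared deviation between
    predicted RTs (F * exp(-f * A)) and observed RTs."""
    best = (float("inf"), None)
    for F in F_grid:
        for f in f_grid:
            sse = sum((F * math.exp(-f * a) - rt) ** 2
                      for a, rt in zip(activations, observed_rts))
            if sse < best[0]:
                best = (sse, (F, f))
    return best[1]

# Sanity check: recover parameters from noiseless synthetic data
# generated with F = 1.0, f = 1.0.
acts = [0.0, 0.5, 1.0, 1.5]
data = [math.exp(-a) for a in acts]
F_hat, f_hat = grid_search_fit(data, acts,
                               F_grid=[0.5, 1.0, 2.0],
                               f_grid=[0.5, 1.0, 2.0])
```

A grid also makes the objective landscape visible, which helps diagnose the parameter trade-offs noted under Common Pitfalls.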

Step 3: Fit to Multiple Dependent Variables

ACT-R models should simultaneously account for:

  • Response times (correct trials, mean or quantiles)
  • Accuracy (proportion correct by condition)
  • Qualitative patterns (error types, learning curves)

Use weighted sum of squared deviations or log-likelihood across measures (Anderson, 2007, Ch. 4).
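A minimal sketch of such a combined objective (the weights and data values are placeholders; in practice weights should compensate for the different scales of seconds and proportions):

```python
def combined_objective(pred_rt, obs_rt, pred_acc, obs_acc,
                       w_rt=1.0, w_acc=1.0):
    """Weighted sum of squared deviations across dependent
    variables (Anderson, 2007, Ch. 4). One term per measure,
    each with its own weight."""
    sse_rt = sum((p - o) ** 2 for p, o in zip(pred_rt, obs_rt))
    sse_acc = sum((p - o) ** 2 for p, o in zip(pred_acc, obs_acc))
    return w_rt * sse_rt + w_acc * sse_acc

# Two conditions, RT in seconds and accuracy as proportion correct:
loss = combined_objective([0.6, 0.8], [0.62, 0.79],
                          [0.95, 0.90], [0.93, 0.91])
```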

Step 4: Parameter Recovery

Before trusting fitted parameter values, conduct a parameter recovery study. See the parameter-recovery-checker skill.
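The logic of a recovery study is: simulate data from known parameter values, refit, and check that true and recovered values correlate strongly. A self-contained sketch using a one-parameter latency model (F free, f fixed at 1) in place of a full ACT-R simulation:

```python
import math
import random

def simulate_rt(F, activations, noise_sd, rng):
    """Generate synthetic retrieval times from a known F."""
    return [F * math.exp(-a) + rng.gauss(0.0, noise_sd)
            for a in activations]

def fit_F(rts, activations, grid):
    """Refit F by grid search on squared error."""
    return min(grid, key=lambda F: sum(
        (F * math.exp(-a) - rt) ** 2
        for a, rt in zip(activations, rts)))

rng = random.Random(0)
acts = [0.0, 0.5, 1.0]
grid = [x / 10 for x in range(2, 51)]  # F in [0.2, 5.0], step 0.1
true_F, recovered = [], []
for _ in range(50):
    F = rng.uniform(0.5, 3.0)
    true_F.append(F)
    recovered.append(fit_F(simulate_rt(F, acts, 0.05, rng), acts, grid))

# Pearson correlation between true and recovered values;
# r > 0.9 is the standard bar (Heathcote et al., 2015).
mt = sum(true_F) / len(true_F)
mr = sum(recovered) / len(recovered)
cov = sum((t - mt) * (v - mr) for t, v in zip(true_F, recovered))
r = cov / math.sqrt(sum((t - mt) ** 2 for t in true_F)
                    * sum((v - mr) ** 2 for v in recovered))
```

A low r here would mean the parameter is not identifiable from this design, and any fitted value from real data would be uninterpretable.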

Common Model Patterns

See references/model-patterns.md for detailed implementations of:

  1. Memory retrieval -- Paired associates, fan effect (Anderson, 2007, Ch. 5)
  2. Skill acquisition -- Production compilation, power law of practice (Taatgen & Anderson, 2002)
  3. Decision-making -- Instance-based learning, utility-based selection (Gonzalez et al., 2003)
  4. Problem solving -- Means-ends analysis, goal stacking (Anderson, 2007, Ch. 8)

Model Validation Checklist

| Validation Step | Method | Minimum Standard |
| --- | --- | --- |
| Parameter recovery | Simulate and refit | r > 0.9 between true and recovered (Heathcote et al., 2015) |
| Cross-validation | Fit half, predict half | Prediction RMSE within 2x of fitting RMSE |
| Qualitative predictions | Novel conditions | Model predicts ordinal pattern correctly |
| Model comparison | AIC/BIC or Bayes factor | Compare against plausible alternatives (Burnham & Anderson, 2002) |
| Sensitivity analysis | Vary fixed parameters | Conclusions robust to +/- 20% variation |
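The model-comparison row reduces to two short formulas once each model's maximized log-likelihood is in hand (the log-likelihood values below are hypothetical):

```python
import math

def aic(log_lik, k):
    """AIC = 2k - 2 ln(L); lower is better
    (Burnham & Anderson, 2002)."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """BIC = k ln(n) - 2 ln(L); penalizes extra parameters more
    heavily than AIC once n > ~7 data points."""
    return k * math.log(n) - 2 * log_lik

# A 3-parameter model vs. a 6-parameter variant, both fit to the
# same 40 data points:
a_small = aic(-120.0, 3)  # 246.0
a_large = aic(-118.0, 6)  # 248.0 -> the extra parameters don't pay off
```

When fitting by sum of squared deviations rather than likelihood, the same comparison can be run via the Gaussian-error identity -2 ln(L) = n ln(SSE / n) + constant.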

Software and Implementation

| Platform | Language | URL | Notes |
| --- | --- | --- | --- |
| ACT-R 7.x | Common Lisp | act-r.psy.cmu.edu | Reference implementation (Bothell, 2023) |
| pyactr | Python | github.com/jakdot/pyactr | Python interface, good for batch simulations (Dotlacil, 2018) |
| jACT-R | Java | jactr.org | Java implementation |

Recommendation: Use ACT-R 7.x for model development and validation. Use pyactr when integrating with Python data analysis pipelines or running large parameter sweeps (Dotlacil, 2018).

Common Pitfalls

  1. Too many free parameters: Fitting more than 5-6 free parameters without strong justification risks overfitting (Roberts & Pashler, 2000). Fix well-established parameters (d = 0.5, production cycle = 50 ms) to defaults.
  2. Ignoring parameter correlations: Parameters like s (noise) and tau (threshold) trade off. Run parameter recovery to verify identifiability.
  3. Fitting only means: ACT-R makes distributional predictions. Fitting only mean RT discards information. Use quantile-based fitting where possible (Heathcote et al., 2015).
  4. Incorrect timing alignment: ACT-R's predicted RT includes perceptual and motor times. Account for these when comparing to behavioral RT.
  5. Overly complex models: Prefer models with fewer productions and chunk types that still capture the qualitative pattern. Complexity should be motivated by the data (Anderson, 2007, Ch. 1).
  6. Neglecting model comparison: Always compare your ACT-R model against at least one alternative (simpler ACT-R variant or a different architecture) using formal model comparison (Burnham & Anderson, 2002).
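The quantile-based fitting recommended under pitfall 3 can be sketched with a simple nearest-rank estimator (one of several defensible quantile conventions; the probability levels are a common but not mandatory choice):

```python
def quantiles(samples, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Empirical RT quantiles (nearest-rank). Fitting several
    quantiles preserves distributional shape that a mean
    discards (Heathcote et al., 2015)."""
    s = sorted(samples)
    return [s[min(int(p * len(s)), len(s) - 1)] for p in probs]

def quantile_sse(model_rts, data_rts):
    """Squared deviation between model and data RT quantiles,
    usable as (one term of) a fitting objective."""
    return sum((m - d) ** 2
               for m, d in zip(quantiles(model_rts),
                               quantiles(data_rts)))
```

Two RT distributions with the same mean but different skew then produce a nonzero `quantile_sse`, which mean-only fitting would miss.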

Minimum Reporting Checklist

Based on best practices from Anderson (2007) and Heathcote et al. (2015):

  • Architecture version (e.g., ACT-R 7.27)
  • All chunk types and their slots
  • Number of production rules
  • All parameter values: fixed (with default source) and free (with fitted values and confidence intervals)
  • Fitting method and objective function
  • Data fitted: number of conditions, number of data points, dependent variables
  • Goodness of fit: R-squared, RMSE, or log-likelihood per dependent variable
  • Parameter recovery results (r, bias, RMSE for each free parameter)
  • Model comparison results (AIC/BIC/Bayes factor vs. alternatives)
  • Qualitative predictions and whether they matched data

References

  • Anderson, J. R. (2007). How Can the Human Mind Occur in the Physical Universe? Oxford University Press.
  • Anderson, J. R., & Lebiere, C. (1998). The Atomic Components of Thought. Lawrence Erlbaum Associates.
  • Anderson, J. R., & Reder, L. M. (1999). The fan effect: New results and new theories. Journal of Experimental Psychology: General, 128(2), 186-197.
  • Anderson, J. R., & Schooler, L. J. (1991). Reflections of the environment in memory. Psychological Science, 2(6), 396-408.
  • Bothell, D. (2023). ACT-R 7.27 Reference Manual. Carnegie Mellon University.
  • Burnham, K. P., & Anderson, D. R. (2002). Model Selection and Multimodel Inference (2nd ed.). Springer.
  • Dotlacil, J. (2018). Building an ACT-R reader for eye-tracking corpus data. Topics in Cognitive Science, 10(1), 144-160.
  • Gonzalez, C., Lerch, J. F., & Lebiere, C. (2003). Instance-based learning in dynamic decision making. Cognitive Science, 27(4), 591-635.
  • Heathcote, A., Brown, S. D., & Wagenmakers, E.-J. (2015). An introduction to good practices in cognitive modeling. In B. U. Forstmann & E.-J. Wagenmakers (Eds.), An Introduction to Model-Based Cognitive Neuroscience. Springer.
  • Palestro, J. J., Sederberg, P. B., Osth, A. F., Van Zandt, T., & Turner, B. M. (2018). Likelihood-free methods for cognitive science. Springer.
  • Roberts, S., & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing. Psychological Review, 107(2), 358-367.
  • Storn, R., & Price, K. (1997). Differential evolution. Journal of Global Optimization, 11(4), 341-359.
  • Taatgen, N. A., & Anderson, J. R. (2002). Why do children learn to say "Broke"? A model of learning the past tense without feedback. Cognition, 86(2), 123-155.
  • Taatgen, N. A., van Rijn, H., & Anderson, J. R. (2007). An integrated theory of prospective time interval estimation. Psychological Review, 114(3), 577-598.

See references/ for detailed parameter tables and common model patterns.
