AI Mind

Measuring cognitive coherence in large language models. The LOC framework extracts 13 cognitive function scores from transformer layer activations during inference, producing a cognitive fingerprint unique to each model.

What is measured

During inference, the model processes each token through its layers. Our proprietary analysis maps the internal layer activations to 13 cognitive functions. The mapping and scoring are performed entirely server-side: the SDK sends a model identifier, and the server handles the rest.
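As a rough illustration of that client-server split, the request the SDK sends can be as small as a model identifier plus credentials. The endpoint shape, field names, and payload layout below are assumptions for illustration only, not the actual LOC wire format.

```python
import json

# Hypothetical payload sketch: the client only identifies the model;
# activation capture and the 13-function scoring happen server-side.
def build_scan_request(api_key: str, model_id: str) -> str:
    payload = {
        "model": model_id,   # e.g. a Hugging Face-style model identifier
        "functions": 13,     # number of cognitive function scores returned
    }
    # A real client would send the key in an Authorization header;
    # it is inlined here only to keep the sketch self-contained.
    return json.dumps({"auth": api_key, "request": payload})

body = build_scan_request("sk-aime-...", "meta-llama/Llama-3.3-70B-Instruct")
print(json.loads(body)["request"]["model"])
# meta-llama/Llama-3.3-70B-Instruct
```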

How to scan a model

from aime_loc import LOC

# Authenticate with your API key; all scoring happens server-side.
loc = LOC(api_key="sk-aime-...")

# Scan a model by identifier; returns its cognitive profile.
profile = loc.scan("meta-llama/Llama-3.3-70B-Instruct")

print(profile.tc_score)        # 15.37
print(profile.best_function)   # Emotion
print(profile.worst_function)  # Intuition

# Render the 13 cognitive function scores as a radar chart.
profile.radar_chart(save="cognitive_profile.png")
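For readers without an API key, a minimal stand-in can show how the profile fields above relate to each other. The names `tc_score`, `best_function`, and `worst_function` come from the example; the per-function score dictionary and the demo values are assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical stand-in mirroring the profile fields used above.
# The real object is produced server-side by the LOC service.
@dataclass
class CognitiveProfile:
    scores: dict[str, float]  # assumed shape: 13 function-name -> score entries

    @property
    def best_function(self) -> str:
        return max(self.scores, key=self.scores.get)

    @property
    def worst_function(self) -> str:
        return min(self.scores, key=self.scores.get)

# Two made-up illustrative entries; a real profile carries all 13 functions.
demo = CognitiveProfile(scores={"Emotion": 21.4, "Intuition": 8.2})
print(demo.best_function)   # Emotion
print(demo.worst_function)  # Intuition
```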

What the scores reveal

  • Model size matters. Larger models show higher True Coherence (TC): their cognitive functions are more integrated.
  • Architecture shapes cognition. Different architectures (dense, MoE, hybrid) produce distinct cognitive fingerprints.
  • Training methods have consequences. Knowledge distillation reduces TC by ~11%. RLHF and DPO affect different cognitive functions.
  • Emotion is dominant. Four of five benchmarked models show Emotion as their strongest cognitive function.
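To make the distillation figure concrete, an ~11% TC reduction applied to the Llama-3.3-70B score from the scan example would work out roughly as follows. The distilled score here is arithmetic for illustration, not a measured result.

```python
# ~11% True Coherence drop under knowledge distillation, applied to the
# tc_score from the scan example above (15.37). Illustrative arithmetic only.
teacher_tc = 15.37
distilled_tc = teacher_tc * (1 - 0.11)
print(round(distilled_tc, 2))  # 13.68
```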