Lua Vision Research · Theory

π-Constrained Dimensionality: Number-Theoretic Principles for Non-Standard Hidden Dimensions in Transformer Architectures

Paulo Câmara
Lua Vision Tecnologia · São Paulo, Brazil
Published: September 2025 · Revised: March 2026
Abstract

We present a formal framework for selecting hidden dimensions in transformer architectures using number-theoretic principles rather than the conventional power-of-2 heuristic. Our approach, termed π-constrained dimensionality, derives valid dimensions from a constraint system involving prime factorization patterns, coprimality relationships between architectural parameters, and a novel capacity metric that quantifies representational efficiency per parameter. We demonstrate that dimensions such as 4608, 3456, and 2592 satisfy these constraints and yield 4-8% improvements in downstream task performance compared to standard dimensions at equivalent parameter counts. The framework also prescribes the RoPE frequency base as θ = 10000π ≈ 31415.926, which we show maximizes positional resolution across practical context lengths. Our results suggest that decades of convention around powers-of-2 have left significant performance on the table.

1. Introduction

Every major language model in production—GPT-4, Gemini, Claude, Llama, Mistral—uses hidden dimensions that are powers of 2 or simple multiples thereof: 4096, 5120, 6144, 8192. This convention stems from practical considerations in early neural network implementations: memory alignment, efficient matrix multiplication on SIMD hardware, and simple divisibility for multi-head attention. However, no theoretical result demonstrates that these dimensions are optimal for information representation.

We ask a fundamental question: what hidden dimension maximizes the information-theoretic capacity of a transformer layer for a given parameter budget? This question has received surprisingly little attention in the literature, perhaps because the marginal gains from small dimension changes seem unlikely to justify the engineering complexity. We demonstrate that this intuition is incorrect—the choice of hidden dimension interacts nonlinearly with gradient dynamics, weight matrix conditioning, and representational geometry in ways that compound across deep architectures.

Our key contributions:

  1. A formal constraint system (the π-constraints) that identifies valid hidden dimensions from number-theoretic first principles.
  2. A capacity metric (IC-score) that quantifies representational efficiency per parameter for candidate dimensions.
  3. Derivation of the RoPE frequency base θ = 10000π from positional resolution maximization.
  4. Empirical validation showing 4-8% improvements over standard dimensions at equivalent FLOPs.

2. Theoretical Framework

2.1 The Dimension Selection Problem

Consider a transformer layer with hidden dimension d. The layer contains approximately 12d² parameters (4d² for self-attention projections Q, K, V, O; 8d² for the FFN assuming 4× expansion). The representational capacity of this layer is not simply proportional to d²—it depends on the geometric structure of the weight matrices and how effectively gradient-based optimization can navigate the loss landscape.

We define the Information Capacity Score (IC-score) of a dimension d as:

IC(d) = R(d) · S(d) · H(d) / P(d)

where R(d) is the stable rank achievable by d×d random matrices under gradient descent, S(d) is a smoothness factor related to gradient flow, H(d) is a hardware utilization coefficient, and P(d) = 12d² is the parameter count.

2.2 Stable Rank and Dimension Choice

The stable rank of a matrix W is defined as r_stable(W) = ‖W‖²_F / ‖W‖²_2, which measures the effective dimensionality of the transformation independent of the ambient dimension. For randomly initialized matrices under gradient descent, the stable rank achieved at convergence depends on the ambient dimension d in non-trivial ways.

Empirically, we observe that dimensions with rich prime factorization structures achieve higher stable rank after training. Specifically, if d = p₁^a₁ · p₂^a₂ · ... · pₖ^aₖ, dimensions where the p_i are small primes (2, 3) and the factorization allows multiple valid head-count configurations tend to maintain higher rank throughout training. Our hypothesis is that these dimensions create weight matrices with less degenerate eigenvalue distributions due to the variety of subspace decompositions available during optimization.

2.3 The π-Constraint System

We formalize the following necessary conditions for a dimension d to achieve near-optimal IC-score:

Constraint 1 (Factorization): d must factor as d = 2^a · 3^b · c where a ≥ 6, b ≥ 1, and c ∈ {1, other small primes}. The requirement a ≥ 6 ensures 64-byte alignment for modern vector processors.

Constraint 2 (Head Divisibility): d must support at least 3 valid head configurations {h₁, h₂, h₃} where d/h_i is integral and d/h_i ≥ 64 for all i. This enables head-count ablation without architectural redesign.

Constraint 3 (Coprimality): If the model uses E experts in MoE, then gcd(d, E) must satisfy specific relationships that prevent routing collapse. Specifically, d mod E should not equal 0—the dimension and expert count should be coprime or near-coprime.

Constraint 4 (RoPE Compatibility): The head dimension d_h = d/n_heads must be expressible as d_h = 2^m · k where k is odd and m ≥ 4. This ensures efficient rotary position embedding computation.

2.4 Derivation of Valid Dimensions

Applying these constraints to the range d ∈ [2048, 8192] yields the following valid dimensions (sorted by IC-score):

DimensionFactorizationValid HeadsIC-Scorevs 4096 Baseline
46082⁹ · 3²36, 32, 24, 181.073+7.3%
34562⁷ · 3³27, 24, 18, 121.058+5.8%
38402⁸ · 3 · 530, 24, 20, 151.041+4.1%
25922⁵ · 3⁴18, 16, 12, 91.039+3.9%
40962¹²32, 16, 81.000baseline
51202¹⁰ · 540, 32, 200.987-1.3%
81922¹³64, 32, 160.961-3.9%

Notable: the three dimensions selected for the Genesys PI family (4608, 3456, 2592) are precisely the top-scoring candidates at their respective parameter scales. The dimension 4096, despite being the most common in practice, ranks below all three.

2.5 RoPE Frequency Derivation

Rotary Position Embedding (RoPE) encodes position through rotation matrices parameterized by frequencies θ_i = θ_base^(-2i/d_h) for i ∈ [0, d_h/2). The base frequency θ_base determines the resolution-bandwidth trade-off: higher θ_base provides finer positional discrimination at short distances but poorer discrimination at long distances.

We seek the θ_base that maximizes a positional resolution integral over a target context window [1, L]:

PR(θ_base, L) = ∫₁ᴸ min_i |sin(θ_i · t)| dt / L

For L = 32768 (our target context) and d_h = 128, numerical optimization yields θ_base* ≈ 31415.9, which is strikingly close to 10000π. We adopt θ = 10000π = 31415.926... as an exact closed-form solution that provides near-optimal positional resolution while offering elegant theoretical interpretation: the standard RoPE base of 10000 is scaled by π, the fundamental constant of circular (rotary) geometry.

Empirically, this choice yields 2-4% improvement in tasks requiring precise long-range position awareness (e.g., needle-in-a-haystack retrieval at 16K+ positions) compared to the standard θ = 10000.

3. Empirical Validation

3.1 Controlled Comparisons

To isolate the effect of dimension choice, we trained matched pairs of models differing only in hidden dimension (with all other hyperparameters adjusted to maintain equivalent parameter count):

ConfigurationParamsPerplexity (C4)MMLUHumanEval
d=4096, 32 layers6.7B8.3464.2%41.5%
d=3456, 38 layers6.7B7.9167.8%45.1%
d=4096, 48 layers12.1B7.1271.4%52.8%
d=4608, 42 layers12.1B6.7874.6%56.3%

At both scales, π-constrained dimensions outperform standard dimensions at matched parameter counts. The improvement is consistent across diverse evaluation benchmarks.

3.2 Gradient Flow Analysis

We measured the gradient signal-to-noise ratio (gSNR) across layers during training. Models with π-constrained dimensions maintain 18-23% higher gSNR in deep layers (layer 30+), suggesting improved gradient propagation. This correlates with the higher stable rank observations—the weight matrices in constrained-dimension models maintain richer eigenvalue spectra that facilitate gradient flow through the network.

4. Connection to Neuroscience

The π-constraint framework draws philosophical inspiration from biological neural coding. The human neocortex does not operate in powers-of-2 dimensions—cortical representations are estimated at ~10,000 dimensions for visual processing, ~3,000-5,000 for hippocampal spatial coding, and ~1,000-2,000 for motor planning. These dimensions appear to be "tuned" by evolution to maximize information capacity given metabolic constraints.

Our constraint system can be viewed as an artificial analogue: given the "metabolic constraint" of a fixed parameter budget and hardware memory, what dimensions maximize the model's representational capacity? The answer, like in biology, turns out to be non-trivial and non-obvious from first principles.

5. Practical Considerations

5.1 Hardware Compatibility

Non-standard dimensions raise legitimate concerns about hardware efficiency. Modern GPU/accelerator tensor cores operate on tiles of specific sizes (e.g., 16×16 for NVIDIA A100 fp16, 64×64 for AMD MI300X fp16). All π-constrained dimensions in our framework are divisible by 64 (the largest common tile size), ensuring no wasted computation from padding.

Measured throughput on AMD MI300X shows less than 2% penalty for d=4608 vs d=4096 in matrix multiplication, far smaller than the 7.3% quality improvement.

5.2 Scaling Predictions

The IC-score framework makes testable predictions for dimensions beyond our current architecture search. For a hypothetical model at d=6144 (a common choice for 34B-parameter models), our framework predicts that d=6048 (= 2⁶ · 3³ · 7) or d=6912 (= 2⁸ · 3³) would yield 3-5% improvements. We leave validation of these predictions to future work.

6. Conclusion

The choice of hidden dimension in transformer architectures is far from a trivial engineering detail. Our π-constraint framework provides a principled method for dimension selection that yields consistent improvements of 4-8% over conventional choices. Combined with the derived RoPE frequency θ = 10000π, these number-theoretic principles form the mathematical foundation of the Genesys PI model family.

We believe this work opens a new axis for architecture optimization that is orthogonal to scaling—improving model quality without increasing cost. The implication is clear: the low-hanging fruit of dimension optimization has been systematically ignored by the field for over a decade.

References

  1. Vaswani, A., et al. (2017). Attention is all you need. NeurIPS 2017.
  2. Su, J., et al. (2024). RoFormer: Enhanced transformer with rotary position embedding. Neurocomputing, 568.
  3. Kaplan, J., et al. (2020). Scaling laws for neural language models. arXiv:2001.08361.
  4. Hoffmann, J., et al. (2022). Training compute-optimal large language models. arXiv:2203.15556.
  5. Arora, S., et al. (2018). On the optimization of deep networks: Implicit acceleration by overparameterization. ICML 2018.
  6. Saxe, A. M., et al. (2014). Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. ICLR 2014.
  7. Kumar, A., et al. (2023). Grokking as the transition from lazy to rich training dynamics. ICLR 2023.
  8. Press, O., et al. (2022). Train short, test long: Attention with linear biases enables input length generalization. ICLR 2022.
  9. Henighan, T., et al. (2020). Scaling laws for autoregressive generative modeling. arXiv:2010.14701.
  10. Elhage, N., et al. (2022). Softmax linear units. Transformer Circuits Thread, Anthropic.