# Quickstart

## Install the Package

```bash
pip install triadic-head
```
## Wrap a Model

```python
from triadic_head import TriadicWrapper

# Wrap any HuggingFace causal LM
model = TriadicWrapper("gpt2", n_bits=64, align_mode="infonce")
```
## Encode Concepts

```python
sigs = model.encode(["king", "queen", "dog", "cat"])
# Each concept gets a prime composite integer
```
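For intuition, here is a pure-Python sketch of what a "prime composite integer" means (this is illustrative only, not the library's learned encoder): assign each bit position a distinct prime, and map a binary signature to the product of the primes at its active positions.

```python
# Illustrative sketch: the first 8 primes stand in for 8 bit positions;
# a signature becomes the product of the primes at its set bits.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

def bits_to_composite(bits):
    composite = 1
    for bit, p in zip(bits, PRIMES):
        if bit:
            composite *= p
    return composite

print(bits_to_composite([1, 0, 1, 1, 0, 0, 1, 0]))  # 2 * 5 * 7 * 17 = 1190
```

Because each prime appears at most once, the composite is square-free and the original bit pattern can be recovered by trial division.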
## Compare

```python
result = model.compare("king", "queen")
print(result)
# {'similarity': 0.89, 'shared_factors': [2, 3, 5, ...], ...}
```
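One plausible reading of `shared_factors` (an assumption for illustration, not the documented implementation) is that the shared primes are exactly the prime factorization of the gcd of the two composites:

```python
import math

def shared_factors(phi_a, phi_b):
    # Primes dividing both composites, i.e. the factorization of their gcd
    g = math.gcd(phi_a, phi_b)
    factors, p = [], 2
    while p * p <= g:
        while g % p == 0:
            factors.append(p)
            g //= p
        p += 1
    if g > 1:
        factors.append(g)
    return factors

print(shared_factors(2 * 5 * 7 * 17, 2 * 3 * 5 * 17))  # [2, 5, 17]
```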
## Validate

```python
report = model.validate()
# Checks diversity, active bits, ordering across groups
```
## Explore

```python
model.explore(["king", "queen", "dog", "cat"], show_factors=True)
# Pairwise similarity heatmap with prime factor details
```
## Pure Algebra (No PyTorch)

The algebra module works without PyTorch:

```python
from triadic_head.algebra import PrimeMapper, TriadicValidator

mapper = PrimeMapper(n_bits=8)
validator = TriadicValidator()

# Map binary vectors to prime composites
phi_a = mapper.bits_to_composite([1, 0, 1, 1, 0, 0, 1, 0])
phi_b = mapper.bits_to_composite([1, 1, 1, 0, 0, 0, 1, 0])

# Algebraic operations
print(validator.subsumes(phi_a, phi_b))
print(validator.compose(phi_a, phi_b))
print(validator.gap_analysis(phi_a, phi_b))
```
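For intuition, here is one way the three printed operations could behave on square-free composites. The names mirror `TriadicValidator`, but the semantics below are assumptions, not the library's definitions: subsumption as divisibility, composition as the lcm (union of the factor sets), and gap analysis as the factors one composite has that the other lacks.

```python
import math

def subsumes(phi_a, phi_b):
    # phi_a carries every factor of phi_b iff phi_b divides phi_a
    return phi_a % phi_b == 0

def compose(phi_a, phi_b):
    # Union of the two factor sets: the least common multiple
    return math.lcm(phi_a, phi_b)

def gap(phi_a, phi_b):
    # Product of the factors of phi_b that phi_a lacks
    return phi_b // math.gcd(phi_a, phi_b)

phi_a = 2 * 5 * 7   # 70
phi_b = 2 * 3       # 6
print(subsumes(phi_a, phi_b))  # False: 3 does not divide 70
print(compose(phi_a, phi_b))   # 210
print(gap(phi_a, phi_b))       # 3
```

Note that `math.lcm` requires Python 3.9+.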
## Training from Scratch

For full training instructions, see the Training Guide.

```python
# Minimal training loop: freeze the backbone, train only the triadic head
model.freeze_backbone()
for batch in dataloader:
    logits, triadic_proj, lang_loss = model(batch["input_ids"], labels=batch["input_ids"])
    tri_loss = model.triadic_loss(triadic_proj, input_ids=batch["input_ids"])
    total_loss = lang_loss + tri_loss
    total_loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```
## Next Steps
- Training Guide -- two-phase protocol, alignment modes, alpha tuning
- Algebraic Operations -- all 8 operations with examples
- API Reference -- complete module documentation