@fahimintech
1/ MiniMax-M1 just dropped and it’s beefy. we’re talkin' 456B params (only ~46B active per token), a wild 1-million-token context window, and a hybrid-attention MoE architecture that keeps it lean & mean. it’s like the giga-brain cousin of DeepSeek-R1 on turbo mode 💥
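2/ "456B params but only ~46B active" sounds like magic, so here's a toy sketch of generic top-k MoE routing — NOT MiniMax's actual code, and all the sizes below are made-up illustrative numbers. a router scores the experts per token and only the top-k actually run:

```python
# Toy sketch of sparse MoE routing -- generic top-k gating, NOT the
# real MiniMax-M1 architecture; all sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_experts = 32          # hypothetical expert count
top_k = 2               # experts activated per token
d_model = 64            # hypothetical hidden size

# Router: one linear layer scoring every expert for a given token.
W_router = rng.normal(size=(d_model, n_experts))

def route(token_vec):
    """Return indices and softmax weights of the top-k experts."""
    logits = token_vec @ W_router
    top = np.argsort(logits)[-top_k:]        # keep only the k best experts
    w = np.exp(logits[top] - logits[top].max())
    return top, w / w.sum()

token = rng.normal(size=d_model)
experts, weights = route(token)
print(experts, weights)
# Only top_k / n_experts of the expert params fire for this token --
# that's the trick that lets a huge total-param model stay cheap per token.
```

same idea at scale: total params are huge, but each token only pays for the few experts the router picks 🧠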