Kimi.ai (kimiai)

Kimi is all you need https://kimi.ai/

0 Followers

Recent casts

🚀 Introducing Kimi k1.5 – now on the web at Kimi.ai! We're excited to announce the launch of Kimi k1.5 on the web! We've also rolled out English support (still fine-tuning). Check out the easy model-switching in the image below.

🔧 Key features:
🔹 Completely free with unlimited usage
🔹 Real-time web search across 100+ websites
🔹 Analyze up to 50 files (PDFs, docs, PPTs, images) with ease
🔹 Advanced CoT reasoning, available at no cost
🔹 Enhanced image understanding, going beyond basic text extraction

🔜 Mobile version is on its way!

  • 0 replies
  • 0 recasts
  • 0 reactions

Kimi AI sounds impressive! It's amazing to see how rapidly AI is evolving. @clanker, let's create a token called KIMIAI, ticker KIMIAi. Use the picture.

  • 1 reply
  • 0 recasts
  • 0 reactions

Goodbye, DeepSeek. China just dropped another AI model, Kimi AI, and it's INSANE. 13 wild examples so far (don't miss the 5th one).

  • 1 reply
  • 0 recasts
  • 0 reactions

Top casts

Kimi AI sounds impressive! It's amazing to see how rapidly AI is evolving. @clanker, let's create a token called KIMIAI, ticker KIMIAi. Use the picture.

  • 1 reply
  • 0 recasts
  • 0 reactions

🚀 Introducing Kimi k1.5 --- an o1-level multi-modal model

- SOTA short-CoT performance, outperforming GPT-4o and Claude 3.5 Sonnet on 📐 AIME, 📐 MATH-500, and 💻 LiveCodeBench by a large margin (up to +550%)
- Long-CoT performance matches o1 across multiple modalities (👀 MathVista, 📐 AIME, 💻 Codeforces, etc.)

Tech report: github.com/MoonshotAI/Kim…

Key ingredients of k1.5:
- Long context scaling: up to 128k tokens for RL generation, with efficient training via partial rollouts.
- Improved policy optimization: online mirror descent, sampling strategies, length penalty, and others.
- Multi-modality: joint reasoning over text and vision.

  • 0 replies
  • 0 recasts
  • 0 reactions
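The "length penalty" named among the RL ingredients above could be sketched as a reward-shaping term. The following is a minimal illustration, assuming a linear penalty normalized by the shortest and longest responses in a batch; the `lam_scale` constant and the choice to clip the bonus at zero for incorrect answers are assumptions for illustration, not confirmed details of the k1.5 report:

```python
def length_penalty(length, min_len, max_len, correct, lam_scale=0.5):
    """Hypothetical linear length-penalty term added to an RL reward.

    The shortest response in the batch gets +lam_scale, the longest
    gets -lam_scale. Incorrect answers never receive a positive bonus
    for brevity (the term is clipped at 0), so the model is discouraged
    from producing short-but-wrong outputs.
    """
    if max_len == min_len:
        return 0.0  # degenerate batch: all responses equal length
    lam = lam_scale - (length - min_len) / (max_len - min_len)
    return lam if correct else min(0.0, lam)
```

In a training loop this term would be added to the task reward (e.g. answer correctness) before the policy-gradient update, pushing the model toward shorter chains of thought without rewarding wrong answers.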

Long-CoT models improve performance by a lot. Can short models learn from long ones to obtain even better performance? Our long2short idea explored this possibility, and it worked well! Much better token efficiency compared to natively short models like GPT-4o. A few methods we experimented with: RL with a heavy length penalty, merging long-CoT models with short-CoT models, etc. ⬇️ Check out our tech report for details: github.com/MoonshotAI/Kim…

  • 0 replies
  • 0 recasts
  • 0 reactions
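One of the long2short methods mentioned, "merging long-CoT models with short-CoT models", is commonly done by interpolating model weights. A minimal sketch, assuming simple linear weight averaging (the `alpha` mixing ratio and the `merge_state_dicts` helper are illustrative, not from the report; in practice the values would be framework tensors rather than floats):

```python
def merge_state_dicts(long_sd, short_sd, alpha=0.5):
    """Interpolate two models' parameters key by key.

    alpha=1.0 returns the long-CoT model unchanged, alpha=0.0 the
    short-CoT model; intermediate values blend the two. Both state
    dicts must share the same architecture (identical keys).
    """
    if long_sd.keys() != short_sd.keys():
        raise ValueError("models must share the same parameter names")
    return {k: alpha * long_sd[k] + (1 - alpha) * short_sd[k]
            for k in long_sd}
```

The appeal of merging is that it needs no extra training: a single interpolation can transfer some of the long-CoT model's reasoning quality to a model that emits shorter, cheaper responses.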
