@luuu
#dailychallenge
CoT
- Chain-of-Thought (CoT) is a reasoning technique where a model breaks down complex problems into intermediate steps before arriving at a final answer.
- Put simply: the model thinks step by step instead of jumping straight to the answer.
- This enhances reasoning capabilities in large language models (LLMs), especially for multi-step problems.
- OpenAI's o1 and o3 (the reasoning models behind ChatGPT) and DeepSeek R1 are built on this methodology.
- This helps the model to
    - improve logical reasoning and problem-solving skills
    - handle mathematical, coding, and other reasoning-heavy tasks
    - become more interpretable by exposing its intermediate steps (see the prompt sketch after this list)
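
A minimal sketch of what a CoT prompt looks like next to a direct prompt. The `call_llm` helper and the example question are illustrative assumptions, not part of any specific API; swap in whichever client you actually use.

```python
# Minimal sketch: direct prompt vs. Chain-of-Thought prompt.
# call_llm is a hypothetical placeholder for a real LLM client call.

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real LLM API call here."""
    raise NotImplementedError

question = (
    "A shop sells pens at $3 each. "
    "If I buy 4 pens and pay with a $20 bill, how much change do I get?"
)

# Direct prompt: asks only for the final answer.
direct_prompt = f"{question}\nAnswer with a single number."

# CoT prompt: asks the model to write out intermediate steps before answering.
cot_prompt = (
    f"{question}\n"
    "Let's think step by step. Show each intermediate step, "
    "then give the final answer on a new line starting with 'Answer:'."
)

# response = call_llm(cot_prompt)  # response would contain the steps + final answer
print(cot_prompt)
```

The only difference is the instruction to show intermediate steps, which is what makes the reasoning visible and tends to improve multi-step accuracy.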
Reasoning
- The recent trend is not only to train bigger models, but to let them reason: spend compute at inference time on intermediate steps.
- A reasoning framework where a model decomposes complex tasks into a logical sequence of intermediate steps before reaching a conclusion.
- This enhances the model's ability to solve problems that require multi-step thinking rather than direct retrieval (see the decomposition sketch below).
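
A minimal sketch of the "decompose, then conclude" idea. It reuses the same hypothetical `call_llm` helper as above (here a toy stub so the sketch runs), and the task and step wording are made-up examples: each intermediate result feeds the next prompt, so the conclusion is built step by step rather than retrieved directly.

```python
# Sketch: decompose a complex task into a logical sequence of intermediate steps.
# call_llm is a hypothetical placeholder; the toy echo lets the sketch run as-is.

def call_llm(prompt: str) -> str:
    """Placeholder: replace with a real LLM API call."""
    return f"[model output for: {prompt[:40]}...]"  # toy echo instead of a real model

task = "Plan a 3-day trip to Kyoto on a $600 budget."  # hypothetical example task

steps = [
    "List the key constraints and goals of this task:\n{task}",
    "Given these constraints:\n{previous}\nDraft a day-by-day outline.",
    "Given this outline:\n{previous}\nEstimate costs, adjust to stay within budget, and state the final plan.",
]

previous = ""
for template in steps:
    prompt = template.format(task=task, previous=previous)
    previous = call_llm(prompt)  # each intermediate result conditions the next step

final_answer = previous  # the conclusion is reached only after the intermediate steps
print(final_answer)
```

Whether the steps live inside one CoT prompt or are chained across calls like this, the core idea is the same: multi-step thinking instead of a single direct answer.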