Google Introduces Super GEMs in Gemini
Google has launched Super GEMs, a new feature in Gemini that brings Opal-powered workflows directly into Gems Manager, enabling users to build, manage, and share workflows natively inside Gemini.
What’s new in Gems Manager:
- Top section: Pre-built Gems from Google Labs
- Bottom section: User-created and custom Gems
Key highlights:
- Native visual workflow creation inside Gemini
- No external tools required
- Unified access to Google Labs tools and Opal workflows
Super GEMs is currently rolling out to a limited group of users. This update positions Gemini as a more extensible AI workspace, moving beyond chat into structured, reusable workflow automation.
Not Accepting the Uptrend, the Whale Is Determined to Block the ETH Train 🦈
- An hour ago, this Whale wallet bridged 3.8 million USDC from Ethereum to Arbitrum via the Mayan Finance protocol. The USDC was immediately transferred to Hyperliquid.
- The Whale then opened a 20x short position on ETH worth $2 million, with an entry price of $2,568 and a liquidation price of $7,398.
- While the market is feeling optimistic, there are still a few individuals who remain stubbornly bearish and go against market sentiment. Will this Whale score big profits or end up with a bitter loss, folks?
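A quick sanity check on those numbers: under simple isolated 20x margin, a short would liquidate only about 5% above entry, so a liquidation price of $7,398 implies the position is cross-margined against far more collateral than the minimum — plausibly the full bridged USDC. Here is a rough sketch of that back-of-the-envelope math; the maintenance-margin fraction and the "all collateral backs this one position" assumption are mine, and real exchange mechanics (tiered margin, funding, fees) will shift the exact figure:

```python
# Rough cross-margin liquidation estimate for a short perp position.
# Assumptions: linear PnL, a flat maintenance-margin fraction, and that
# the full bridged USDC backs the position. Real exchange mechanics differ.

def short_liq_price(entry, notional, collateral, maint_frac=0.01):
    size = notional / entry                      # position size in ETH
    # Liquidation when equity falls to the maintenance requirement:
    #   collateral + size * (entry - P) = maint_frac * size * P
    return (collateral + size * entry) / (size * (1 + maint_frac))

# Numbers from the cast: $2M short entered at $2,568, ~$3.8M USDC bridged over.
liq = short_liq_price(entry=2_568, notional=2_000_000, collateral=3_800_000)
print(round(liq))  # lands in the same ballpark as the quoted $7,398

# For contrast: isolated 20x margin (collateral = notional / 20) liquidates
# only ~5% above entry, nowhere near $7,398.
print(round(short_liq_price(2_568, 2_000_000, 100_000, maint_frac=0.0)))
```

The gap between the two printed figures is the point: the quoted liquidation price only makes sense if the short is backed by the whole war chest, not just the 20x minimum.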
$2,700 $ETH
The U.S. Securities and Exchange Commission (SEC) has acknowledged:
“America’s core values (economic freedom, private ownership, and a spirit of innovation) are all embedded in the DNA of the DeFi (Decentralized Finance) movement.”
We are entering a new era of financial markets. Although the market may fluctuate up or down, DeFi, as the backbone of the digital financial ecosystem, will continue to grow, especially with clear regulatory frameworks and supportive policies in place.
Will the market remain bullish in 2025? Which tokens are worth paying attention to? Thanks @icryptoai
openai's chief brain Jakub Pachocki says ai’s ’bout to go from prompt puppy to full-on phd
currently: ai needs hand-holding—“please write this code,” “please analyze this chart,” like babysitting a genius
but in 5y: it’ll be doing full-blown research on its own, no babysitter
deep research tool already crawls and synthesizes info in mins—early prototype vibes
next step: give it more compute and let it tackle open problems solo
key sauce? reinforcement learning
pre-train = world model from data
RL = teach it how to think, trial/error + human feedback
they’re pushing RL hard—models now solve gnarly stuff like global remote dev scheduling with zero human hand-holding
open question: should pre-train and RL stay separate or merge into one big learning loop?
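The pre-train/RL split above can be sketched in toy form: start from a "pretrained" prior over behaviors, then nudge it with trial-and-error feedback via a REINFORCE-style policy gradient. Everything here is illustrative — the action names, reward values, and hyperparameters are made up, and a real RLHF pipeline operates on token sequences with a learned reward model:

```python
import math, random

random.seed(0)

# "Pre-training" gives a prior: logits over candidate behaviors learned from data.
actions = ["guess", "reason step by step", "refuse"]
logits = [0.5, 0.0, 0.2]          # hypothetical pretrained preferences

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

# "Human feedback" stands in for a reward model: careful reasoning pays off.
reward = {"guess": 0.1, "reason step by step": 1.0, "refuse": 0.0}

# REINFORCE: sample a behavior, then push its log-probability up in
# proportion to the reward it earned — trial and error, no labels.
lr = 0.1
for _ in range(2000):
    probs = softmax(logits)
    i = random.choices(range(len(actions)), weights=probs)[0]
    r = reward[actions[i]]
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]   # d log pi(i) / d logit_j
        logits[j] += lr * r * grad

best = actions[max(range(len(actions)), key=lambda j: logits[j])]
print(best)  # RL shifts the policy toward the high-reward behavior
```

The pretrained prior initially favors "guess", but the feedback loop steadily reallocates probability mass toward the behavior that earns reward — which is the whole "teach it how to think" pitch in miniature.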