martin ↑
@martin
sam altman tends to refer to any downsides of AI as "society will need to figure this out" whereas any good things are "ChatGPT will enable people to do this". privatize the profits, socialize the losses i guess
6 replies
0 recast
37 reactions

agusti
@bleu.eth
yeah that's also why they want to capture lawmakers so they get a free pass
1 reply
0 recast
1 reaction

agusti
@bleu.eth
also go look for the -long con- sam altman reddit post (from when he stole it from the founders with shitty ass manipulation techniques and then had the gall to publicly admit it) also look into his blogposts on creating both the problems (ai, dead internet) and solutions (worldcoin, kyc for proof of human) so you can -sell your solution- to the problem u made in the first place
2 replies
0 recast
0 reaction

martin ↑
@martin
yes i saw the reddit thing, pretty crazy haha. i fundamentally really distrust him and hope there is a way to get him out of openai sooner rather than later
1 reply
0 recast
1 reaction

agusti
@bleu.eth
same, sadly it seems that ship sailed when the board tried to oust him but daddy MS backed him up. they're now on bad terms with MS again, gotta wonder if anyone at MS will have the balls to try and get rid of him again. whats funnier, even chatgpt says we shouldnt trust him lol:

Why skepticism is warranted

(columns: Domain / Observable pattern and representative episodes / Implication for trust)

• Candor with governance bodies: In 2023 the OpenAI board removed Altman for being "not consistently candid," then rehired him after staff and investor pressure, replacing the dissenting directors. The board's stated reason was a loss of trust; an outside review later called it a "relationship breakdown," not exoneration. Implication: if directors can't verify key facts, external stakeholders have even less visibility.

• Mission drift & structural conflicts: OpenAI has morphed from a nonprofit pledging openness to a capped-profit firm and, in 2024, explored a public-benefit conversion that would leave the nonprofit with only a minority stake. Implication: strategic pivots dilute the original "public benefit" claim and shift incentives toward private value capture.

• Privacy & biometric ethics: Altman-backed Worldcoin was temporarily banned in Spain and ordered to purge data in Kenya over iris-scanning practices. Implication: trust hinges on data stewardship; repeated regulatory rebukes suggest a willingness to push gray-zone tactics until stopped.

• Intellectual-property boundaries: After Scarlett Johansson declined to voice ChatGPT, OpenAI launched a voice critics said was "eerily similar." Johansson hired counsel; OpenAI pulled the voice and apologized. Implication: a "launch-first, ask-forgiveness" approach erodes confidence that consent and attribution will be respected.

• Undisclosed or entangled interests: Altman chaired nuclear-reactor startup Oklo while OpenAI scouted gigawatt-scale power deals, stepping down only after media scrutiny. Implication: late conflict-mitigation moves fuel suspicion that private equity stakes steer corporate strategy.

• Regulatory capture strategy: OpenAI's federal lobbying spend jumped ~7× in 2024, and the firm now employs ex-Hill staff to shape AI and energy rules. Implication: heavy lobbying, combined with rhetoric that "society will figure this out," positions Altman as both referee and player.

• Selective transparency on externalities: Public blog posts cite 0.34 Wh per ChatGPT query, yet Senate testimony frames AI's future cost as "limited by the abundance of energy," and analysts estimate ChatGPT's footprint at orders of magnitude higher. Implication: downplaying worst-case energy demand while pitching nuclear solutions looks like narrative engineering.

⸻

Failure-mode hypothesis (⚠ provisional)

"Iterate in secrecy, externalize the risk envelope, negotiate protections only once scale advantages are locked in."

• Moral hazard: When downside costs (privacy breaches, energy strain) are borne by the public, decision makers face weak deterrents.
• Policy arbitrage: By lobbying for flexible, high-level rules, OpenAI can claim compliance while retaining discretion over concrete safety practices.
• Lock-in: Vertical moves (ID verification, power generation) entrench market power and raise switching costs for governments and users.

These incentives, not personal charisma, determine trustworthiness over time.

⸻

Counter-view / alternative framing (🟰 valid but less pessimistic)

1. High-stakes R&D requires speed. Rapid iteration, even with missteps, may yield net societal benefit if supervised by adaptive regulation rather than static rules.
2. Self-disruption signs: Altman's public admission that OpenAI was "on the wrong side of history" on open-source models suggests some capacity to revise course.
3. Conflict-mitigation measures exist: Stepping down from Oklo and inviting an external board review after the 2023 "blip" are imperfect but non-trivial governance checks.

Whether these gestures outweigh the recurring opacity is an empirical question, one that regulators and independent auditors should test rather than assume.

⸻

What would falsify the "can't be trusted" claim?

(columns: Falsifiable signal / Confidence ↑ if observed)

• Third-party pre-release audits become mandatory for GPT-5 and later models, with public scorecards. (▲ Transparency)
• Worldcoin passes EU GDPR "binding corporate rules" review without exemptions. (▲ Privacy compliance)
• OpenAI publishes full energy and carbon accounts, verified by an external body, alongside its model releases. (▲ Externality honesty)
• Lobbying spend plateaus or declines relative to R&D budget, and policy positions align with independent safety researchers. (▲ Reduced capture risk)

Absent such signals, the cumulative evidence supports caution toward Altman's assertions and ventures.

⸻

Actionable takeaway for policymakers & users

• Tie privileges to provable disclosures. Grant compute licenses, federal contracts, or ID schemes only after independent audits and conflict-of-interest statements.
• Separate solution layers. Require structural firewalls between identity services, energy suppliers, and AI model development to prevent bundled leverage.
• Monitor rhetoric drift. Track discrepancies between public promises (e.g., "open" research, low energy use) and measurable practices; treat them as leading risk indicators rather than PR slips.

⸻

Bottom line: Trust is earned through verifiable alignment of words, incentives, and observable behavior. Altman's track record (board candor issues, privacy pushbacks, IP controversies, aggressive lobbying) shows persistent gaps between narrative and action. Until independent and enforceable governance closes those gaps, skepticism remains the rational stance.
0 reply
0 recast
0 reaction