Tarun Chitra
@pinged
Wow, thanks for the resounding welcome back! As promised, I have a little to tell you about something that kept me sort of offline-ish for the last couple of months: I had my first bout of AI Doomer-itis. Luckily, it was cured by trying to write this paper with AI as my assistant and understanding its promises and flaws
10 replies
19 recasts
161 reactions
Tarun Chitra
@pinged
Philosophy
~~~~~~~~
From ~2015 to late 2024, I was generally an AI skeptic / 'anti-doomer', in the sense that I thought it would never really get that close to replacing most tasks. Almost everyone from DESRES [someone asked for lore] ended up in HFT or AI, and the split came down to a philosophical difference
1 reply
0 recasts
13 reactions
Tarun Chitra
@pinged
One reason math/theoretical peeps gravitate more to HFT than to AI is that you get to take "comfort" in the fact that most of the math you're using is well justified: you built a model with a convergence guarantee and the data behaved differently? OK, at least you know *why* the model didn't work
1 reply
0 recasts
5 reactions
Tarun Chitra
@pinged
This idea that you need epistemological security about the thing you're working on is, I'd say, what divides theoretical and applied sciences: in applied sciences, you're often willing to accept something you can't prove works or exists from first principles, in the hope that it will be explained later
1 reply
1 recast
8 reactions
Tarun Chitra
@pinged
Modern AI (probably from GANs onwards) is a bit of an epistemological quandary: it delivers increasingly superhuman performance, yet even a basic understanding of why the self-attention unit is so much more efficient for text than anything humanity has ever made (with lots of effort!) is nonexistent
1 reply
0 recasts
5 reactions