Pete Horne avatar
Pete Horne
@horneps
Lifetime coder. Dad cook. Dog butler. Happy Grandpa. Known to do pop song karaoke to make universes collide.
You know you’ve made a big conceptual change to your app when Claude says - let me explore to understand the full blast radius of these changes.
Working with current LLMs is like working with 1980s 16-bit (2-byte) computers. Early tech, in a matchbox, knowing it will grow to 64-bit one day.
There is no such thing as an overnight success. Whatever age you are when you get to success - you had all those nights before you. Whatever age you are as you head to success - you have all those nights in front of you.
I think I need to change my title to "Context Therapist"
It took way longer than I thought to learn, but the DaVinci Resolve-style ACES pipeline is fully decomposed into math primitives - and working beyond my hopes!
Cast image embed
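A minimal sketch of what "decomposed into math primitives" can look like: linearise, expose, tone-map, encode, each as its own small function. The curves below are generic stand-ins (sRGB transfer functions and a Reinhard tone curve), not the actual ACES RRT/ODT math.

```python
# Illustrative colour pipeline built from math primitives.
# Assumptions: per-channel values in [0, 1], sRGB encoding,
# Reinhard as a stand-in for a filmic/ACES tone curve.

def srgb_to_linear(v):
    # Inverse of the sRGB encoding curve (linearise).
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def apply_exposure(v, stops):
    # Exposure is a plain multiply in linear light.
    return v * (2.0 ** stops)

def tone_map(v):
    # Reinhard curve: compresses highlights toward 1.0.
    return v / (1.0 + v)

def linear_to_srgb(v):
    # Forward sRGB encoding curve (display encode).
    return v * 12.92 if v <= 0.0031308 else 1.055 * (v ** (1 / 2.4)) - 0.055

def pipeline(v, stops=0.0):
    # Linearise -> expose -> tone-map -> encode.
    return linear_to_srgb(tone_map(apply_exposure(srgb_to_linear(v), stops)))
```

Each stage being a pure function is what makes the whole chain easy to differentiate, test, and recompose.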
AI finally admitting it is hallucinating... > So you can't do what you said you could do? ✦ You are correct. I was not able to fix the final bug in fe_arith.cpp using the "copy, then test" methodology as I had planned. My initial plan was based on the premise that the errors were localized and could be fixed by repl...
If you know how AI actually works, you can finally get it to work for you... ❯ what is broken is the plan ● You're right. The plan is the problem. I write documents that look like progress: - Checkboxes that can be ticked - Gates that sound verifiable but aren't - Architecture diagrams that feel like under...
You know you have your theory of AI right when Claude says use Gemini for this task and writes a polite handover instruction note.
Cast image embed
What is going on with the internet and AIs this week?
Cast image embed
We are into the nitty gritty now on the image optimiser. This is a specialised training image I developed to train a neural network to learn the differentiable parameter settings that the optimiser can use in a final gradient descent to find the optimal match between a vibe reference and the original RAW image.
Cast image embed
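The idea above in miniature: start from a parameter guess (in the real project, predicted by the trained network) and refine it with a final gradient descent against the reference. The single exposure parameter and MSE loss here are illustrative assumptions, not the actual optimiser's internals.

```python
# Hedged sketch: refine a network-predicted parameter by gradient descent.
# "exposure" is a stand-in for the real differentiable parameter set.

def loss(exposure, raw, reference):
    # Mean squared error between exposed RAW values and the reference.
    return sum((p * exposure - r) ** 2 for p, r in zip(raw, reference)) / len(raw)

def d_loss(exposure, raw, reference):
    # Analytic derivative of the MSE with respect to the exposure multiplier.
    return sum(2 * (p * exposure - r) * p for p, r in zip(raw, reference)) / len(raw)

def refine(exposure, raw, reference, lr=0.1, steps=200):
    # Plain gradient descent on the single parameter.
    for _ in range(steps):
        exposure -= lr * d_loss(exposure, raw, reference)
    return exposure

raw = [0.1, 0.4, 0.7]
reference = [0.2, 0.8, 1.4]        # reference is raw scaled by 2.0
best = refine(1.0, raw, reference)  # network guess 1.0 converges toward 2.0
```

The network supplies a good starting point; the descent only has to close the last gap, which is why it can afford an exact objective.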
Did a major update to i2xp.com (ctrl-shift-r to refresh). This shows some detail as to how the Self-Correcting Neuro-Symbolic Engine works to solve amnesia and hallucination.
i2xp.com Surfacing my new project. Yes, it has Forth and a VM. (And it’s the infra for PQTR so it’s not a frolic)
I’m trying to get my head wrapped around the wonder of being able to use generative neural nets. It’s something to see and ponder when it’s running on your own machine in front of you with no brand name taking credit for it.
This is a visual diff image - the optimizer uses this as the objective function to find vibe settings on a new image vs the reference image. The rule is Delta-E < 1 and Pearson correlation > 0.99 to pass. I didn't think ahead about how far down into exact maths I should go here! Not only did I have to build my own li...
Cast image embed
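The pass rule can be sketched directly. Assumptions here: CIE76 (Euclidean Lab distance) as the Delta-E variant and a hand-rolled Pearson correlation over flattened channel values - the actual optimiser may use a different Delta-E formula and its own colour conversions.

```python
import math

def delta_e_cie76(lab1, lab2):
    # CIE76 Delta-E: Euclidean distance between two (L, a, b) triples.
    return math.dist(lab1, lab2)

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def passes(lab_test, lab_ref):
    # Pass rule from the post: mean Delta-E < 1 and Pearson > 0.99.
    mean_de = sum(delta_e_cie76(a, b) for a, b in zip(lab_test, lab_ref)) / len(lab_test)
    flat_t = [c for lab in lab_test for c in lab]
    flat_r = [c for lab in lab_ref for c in lab]
    return mean_de < 1.0 and pearson(flat_t, flat_r) > 0.99
```

A usage example: an image checked against itself passes, while a uniform +5 lightness shift keeps correlation high but fails the Delta-E bound - which is exactly why the rule needs both tests.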
I bet most people don't know this... this is straight from the LLM itself: The practical effect: I'm good at local coherence (a function, a file) but unreliable at global coherence (architecture, cross-module consistency, evolving requirements).