devFC
devFC has two parts: a private community and public channels. Subscribers can join the private community, while finished educational resources, as well as some discussions, are published in this open channel. https://dtech.vision
Samuel ツ

@samuellhuber.eth

I wonder why we haven't seen fully AI-created companies yet. SaaS in particular, with email as the customer feedback channel, should be able to run entirely on a stack of AI developers implementing features, AI code reviewers approving/rejecting and looping the engineers until the work matches, and PMs requesting new features, doing market research, and handling customer support. All that data feeds into the models and priorities to drive revenue and customer happiness. I'd bet 2026 is the inflection point where these become doable.
8 replies
1 recast
31 reactions

Samuel ツ

@samuellhuber.eth

How you don't have to trust Bluetooth, but can still make it secure enough for wallets: https://blog.bitbox.swiss/en/whisper-how-the-secure-bluetooth-integration-of-the-bitbox02-nova-works/
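The core idea, as a rough sketch of the general pattern (not the BitBox implementation): treat Bluetooth as a dumb, untrusted pipe and do key agreement plus authenticated encryption at the application layer, so the transport only ever sees ciphertext. A minimal Node/TypeScript illustration; the `sendOverBle` hook and the message are made up.

```typescript
import { createCipheriv, diffieHellman, generateKeyPairSync, hkdfSync, randomBytes } from "node:crypto";

// Each side generates an ephemeral X25519 key pair; only the public keys cross
// the untrusted Bluetooth link.
const device = generateKeyPairSync("x25519");
const app = generateKeyPairSync("x25519");

// Both sides derive the same shared secret locally; it never goes over the air.
const sharedSecret = diffieHellman({
  privateKey: app.privateKey,
  publicKey: device.publicKey,
});

// Stretch the secret into a 32-byte session key.
const sessionKey = Buffer.from(
  hkdfSync("sha256", sharedSecret, Buffer.alloc(0), "ble-session", 32)
);

// Everything that crosses Bluetooth is AEAD-encrypted: the transport only sees
// ciphertext plus an auth tag, so it can't read or silently tamper with payloads.
function seal(plaintext: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", sessionKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}

// Hypothetical transport hook: whatever BLE characteristic write you use goes here.
const sendOverBle = (frame: Buffer) => console.log("ble frame:", frame.toString("hex"));

sendOverBle(seal(Buffer.from("sign tx ...")));
```

In a real pairing you would still verify the exchanged public keys out of band (e.g. a code shown on the device screen) to rule out a man-in-the-middle, which is the part the linked post goes into.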
1 reply
1 recast
8 reactions

Samuel ツ

@samuellhuber.eth

On code review, from simonwillison.net's newsletter (ship LLM-generated code, but prove it works):

How to prove it works

There are two steps to proving a piece of code works. Neither is optional.

The first is manual testing. If you haven't seen the code do the right thing yourself, that code doesn't work. If it does turn out to work, that's honestly just pure chance. Manual testing skills are genuine skills that you need to develop. You need to be able to get the system into an initial state that demonstrates your change, then exercise the change, then check and demonstrate that it has the desired effect. If possible I like to reduce these steps to a sequence of terminal commands which I can paste, along with their output, into a comment in the code review. Here's a recent example.

Some changes are harder to demonstrate. It's still your job to demonstrate them! Record a screen capture video and add that to the PR. Show your reviewers that the change you made actually works. Once you've tested the happy path where everything works you can start trying the edge cases. Manual testing is a skill, and finding the things that break is the next level of that skill that helps define a senior engineer.

The second step in proving a change works is automated testing. This is so much easier now that we have LLM tooling, which means there's no excuse at all for skipping this step. Your contribution should bundle the change with an automated test that proves the change works. That test should fail if you revert the implementation. The process for writing a test mirrors that of manual testing: get the system into an initial known state, exercise the change, assert that it worked correctly. Integrating a test harness to productively facilitate this is another key skill worth investing in. Don't be tempted to skip the manual test because you think the automated test has you covered already! Almost every time I've done this myself I've quickly regretted it.

Make your coding agent prove it first

The most important trend in LLMs in 2025 has been the explosive growth of coding agents - tools like Claude Code and Codex CLI that can actively execute the code they are working on to check that it works and further iterate on any problems. To master these tools you need to learn how to get them to prove their changes work as well. This looks exactly the same as the process I described above: they need to be able to manually test their changes as they work, and they need to be able to build automated tests that guarantee the change will continue to work in the future.

Since they're robots, automated tests and manual tests are effectively the same thing. They do feel a little different though. When I'm working on CLI tools I'll usually teach Claude Code how to run them itself so it can do one-off tests, even though the eventual automated tests will use a system like Click's CliRunner. When working on CSS changes I'll often encourage my coding agent to take screenshots when it needs to check if the change it made had the desired effect.

The good news about automated tests is that coding agents need very little encouragement to write them. If your project has tests already most agents will extend that test suite without you even telling them to do so. They'll also reuse patterns from existing tests, so keeping your test code well organized and populated with patterns you like is a great way to help your agent build testing code to your taste. Developing good taste in testing code is another of those skills that differentiates a senior engineer.
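For the automated half of that loop, a test that mirrors the manual steps (known initial state, exercise the change, assert the effect) can be as small as this vitest sketch; `applyDiscount` is a made-up function standing in for whatever your PR actually changes.

```typescript
import { describe, expect, it } from "vitest";

// Hypothetical change under test: orders above 100 now get a 10% discount.
// Replace with whatever your PR actually touches.
function applyDiscount(total: number): number {
  return total > 100 ? total * 0.9 : total;
}

describe("applyDiscount", () => {
  it("discounts orders above 100", () => {
    // 1. known initial state
    const total = 200;
    // 2. exercise the change
    const result = applyDiscount(total);
    // 3. assert the desired effect; this fails if the discount logic is reverted
    expect(result).toBe(180);
  });

  it("leaves small orders untouched (edge case)", () => {
    expect(applyDiscount(100)).toBe(100);
  });
});
```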
0 replies
0 recasts
6 reactions

Samuel ツ

@samuellhuber.eth

Some of the best viral mini apps come from @jc4p. I went through all of his mini apps with a set framework, and here's the output analysis (AI). (The full detailed 38-app analysis is public on the dTech GitHub.) https://github.com/dtechvision/miniapp-analysis
5 replies
7 recasts
46 reactions

Samuel ツ

@samuellhuber.eth

Doing some really, really cool technical exploration of deterministic storage solutions. Yes, no unexpected errors. Why on Twitter/X? Because that's where the maintainers of these tools are. https://x.com/samuellhuber/status/2000848958713725206?s=20
2 replies
2 recasts
52 reactions

Samuel ツ

@samuellhuber.eth

amp (coding agent) cli has improved a lot! it feels snappy, very fast, and seems to do more work faster than codex & claude code. glad @mcbain had me try it months ago. going to give it another shot
4 replies
1 recast
10 reactions

Samuel ツ

@samuellhuber.eth

kind of realizing that any template/boilerplate that isn't merely an inspirational codebase needs:
1) Error handling
2) Error logging/tracking, e.g. Sentry
3) Analytics, e.g. PostHog
4) Logging -> e.g. Grafana Loki
5) Traces, spans, metrics -> e.g. Grafana & Prometheus, or just the Grafana stack
6) Good local dev environment -> Docker Compose with all of the above preconfigured
7) Extensive test suite to verify everything!!! @effect/vitest + vitest visual regression seems interesting
8) Heavy linting and formatting with pre-commit hooks so the standard is upheld
9) CI to verify every single PR with tests, type checks, build, e2e tests
10) CD to ensure the code is what runs in prod
11) AI verification and review
12) A checkmark for humans to have tested the preview branch (hopefully needed less and less as AI improves)
if you haven't thought of all these, you will as soon as you hit users or you/clients ask "where is this error coming from", "how many users do we have", "why is this so slow", "how can we make this faster", "is this actually running" ツ
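A minimal sketch of what 1) and 2) can look like in Effect-flavored TypeScript; the `trackError` helper is a stand-in for a real Sentry capture call, not code from any referenced repo.

```typescript
import { Data, Effect } from "effect";

// 1) Error handling: model expected failures as tagged errors instead of thrown exceptions.
class UserNotFound extends Data.TaggedError("UserNotFound")<{ readonly id: string }> {}

// 2) Error tracking: stand-in for a real Sentry/PostHog capture call.
const trackError = (error: unknown) =>
  Effect.sync(() => console.error("[tracker] captured:", error));

const loadUser = (id: string) =>
  id === "42"
    ? Effect.succeed({ id, name: "Ada" })
    : Effect.fail(new UserNotFound({ id }));

const program = loadUser("7").pipe(
  // Handle the expected failure explicitly...
  Effect.catchTag("UserNotFound", (e) =>
    // ...but still report it, so "where is this error coming from" has an answer.
    trackError(e).pipe(Effect.as({ id: e.id, name: "unknown" }))
  ),
  Effect.tap((user) => Effect.log(`loaded user ${user.name}`))
);

Effect.runPromise(program);
```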
3 replies
1 recast
16 reactions

Samuel ツ

@samuellhuber.eth

are there good local coding llms? gpt-oss-codex??? I want to run a husky hook that reviews semantics on top of linting. The LLM should check that the tests make sense and things like that. Using an LLM here means we can go beyond static linters for quality control. cc @shoni.eth @jtgi are you doing something like this?
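One way such a hook could look, assuming a local model served via Ollama; the model name, the prompt, and wiring it into .husky/pre-commit are all assumptions, not something referenced above.

```typescript
// review-staged.ts: run from a .husky/pre-commit hook, e.g. `bun run review-staged.ts`
// Assumes an Ollama server on localhost:11434 with some local coding model pulled.
import { execSync } from "node:child_process";

const diff = execSync("git diff --cached --unified=0", { encoding: "utf8" });
if (!diff.trim()) process.exit(0); // nothing staged, nothing to review

const response = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen2.5-coder", // whatever local model you have pulled
    stream: false,
    prompt:
      "Review this staged diff for semantic problems a linter would miss " +
      "(tests that don't actually assert anything, dead branches, misleading names). " +
      "Answer PASS or FAIL with a one-line reason.\n\n" + diff,
  }),
});

const { response: verdict } = (await response.json()) as { response: string };
console.log(verdict);

// Block the commit when the model flags something; remove this to keep it advisory.
if (verdict.includes("FAIL")) process.exit(1);
```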
5 replies
0 recasts
16 reactions

Samuel ツ

@samuellhuber.eth

added Effect Vitest instructions based on bun, not pnpm 😄 So if you use bun like I do, you get copy-paste instructions to init and have AI write tests for you: https://github.com/dtechvision/great-repo-files/blob/master/effect-vitest-bun.md
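Roughly what a test against that setup looks like (a minimal sketch, not copied from the linked instructions): `it.effect` from @effect/vitest runs the returned Effect and fails the test if the Effect fails; `parsePort` is invented for the example.

```typescript
import { it } from "@effect/vitest";
import { Effect } from "effect";
import { expect } from "vitest";

// Hypothetical effect under test: parses a port from an env-style string.
const parsePort = (raw: string) =>
  Effect.try({
    try: () => {
      const port = Number(raw);
      if (!Number.isInteger(port) || port <= 0) throw new Error(`bad port: ${raw}`);
      return port;
    },
    catch: (e) => new Error(String(e)),
  });

// it.effect runs the returned Effect on the Effect runtime.
it.effect("parses a valid port", () =>
  Effect.gen(function* () {
    const port = yield* parsePort("3000");
    expect(port).toBe(3000);
  })
);

it.effect("rejects garbage", () =>
  Effect.gen(function* () {
    const exit = yield* Effect.exit(parsePort("not-a-port"));
    expect(exit._tag).toBe("Failure");
  })
);
```

With vitest and @effect/vitest installed via bun, `bunx vitest` runs it like any other test file.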
0 replies
0 recasts
9 reactions

Samuel ツ

@samuellhuber.eth

incredible tool and devx https://www.effect.solutions/ https://x.com/davis7/status/1994943860569506051?s=20
2 replies
0 recasts
7 reactions

Samuel ツ

@samuellhuber.eth

Effect truly is write once, use everywhere. I was wondering if I can use any kind of analytics / monitoring backend. Sure I can. OTEL, Sentry, whatever. Just use Effect and provide implementations.
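That "provide implementations" bit, concretely: the program depends on a service tag, and the backend is just whichever Layer you hand it at the edge. A rough sketch; the Telemetry service and both layers are made up for illustration.

```typescript
import { Context, Effect, Layer } from "effect";

// The program only knows about this interface, not the backend behind it.
class Telemetry extends Context.Tag("Telemetry")<
  Telemetry,
  { readonly track: (event: string) => Effect.Effect<void> }
>() {}

const program = Effect.gen(function* () {
  const telemetry = yield* Telemetry;
  yield* telemetry.track("checkout_completed");
});

// One implementation per backend; swap them without touching `program`.
const ConsoleTelemetry = Layer.succeed(Telemetry, {
  track: (event) => Effect.sync(() => console.log("[console]", event)),
});

const OtelishTelemetry = Layer.succeed(Telemetry, {
  // Stand-in: a real layer would call the OTEL / Sentry SDK here.
  track: (event) => Effect.sync(() => console.log("[otel-ish]", event)),
});

Effect.runPromise(program.pipe(Effect.provide(ConsoleTelemetry)));
```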
2 replies
0 recasts
13 reactions

Samuel ツ

@samuellhuber.eth

Valtown using Effect @artivilla.eth https://x.com/kitlangton/status/1994100279826858399
0 replies
0 recasts
1 reaction

Samuel ツ

@samuellhuber.eth

Super cool repo
1 reply
0 recasts
5 reactions

Samuel ツ

@samuellhuber.eth

if you only want to install and use @bunjavascript, but need utils to be able to look up npm and node, use this trick. It just worked for me to run installers that want to call npm subroutines.
1 reply
0 recasts
3 reactions

Samuel ツ

@samuellhuber.eth

This is so good! https://effect.kitlangton.com @rafi and yes @statuette, it's Effect visualized ツ
3 replies
2 recasts
16 reactions