nexart
share your nexart.xyz creation here

@arrotu

Shipped a trust upgrade across NexArt:
• Canonical node now issues signed attestation receipts (Ed25519) + publishes public keys
• @nexart/ai-execution v0.5.0: offline verification of node receipts + reason codes
• @nexart/codemode-sdk v1.9.0: same receipt verification for deterministic runs

Try it:
1. Generate a certified AI record: https://nexartaiauditor.xyz
2. Verify independently (no account): https://recanon.xyz

Proofs are now portable + independently verifiable — no “trust our database”.
0 reply
0 recast
2 reactions


@arrotu

Feels good to see the outreach strategy for nexart.io working and landing the first meet/demo. Not expecting a conversion just yet, but having someone understand the pressure point in the industry and see the value of NexArt is already a great plus.
3 replies
0 recast
8 reactions


@arrotu

gm https://medium.com/@arrotu-nexart/the-missing-proof-layer-in-ai-verifiable-execution-infrastructure-9ec1d1376620
0 reply
0 recast
3 reactions


@arrotu

If you are building agentic flows, retrieval pipelines, or internal decision automation, I would genuinely love to know whether this matches the kind of evidence layer you need. https://x.com/Nexart_io/status/2041469395247993148?s=20
0 reply
0 recast
4 reactions


@arrotu

Most AI tooling stops at execution. Very little helps you prove what actually ran.

We just published two official NexArt example repos for:
• LangChain
• n8n

The goal is simple: make Certified Execution Records easier to add to real workflows. Not theory. Not just protocol docs. Working examples builders can start from now.

https://github.com/artnames
0 reply
0 recast
1 reaction


@arrotu

Just wrote another article on NexArt: https://paragraph.com/@artnames/
0 reply
0 recast
2 reactions


@arrotu

We’ve just published @nexart/agent-kit.

Why we built it: as more builders move from single LLM calls to agent-style workflows, the integration problem changes. It’s no longer just about recording a model output. It’s about being able to attach verifiable records to:
• individual tool calls
• intermediate workflow steps
• final decisions

The lower-level NexArt primitives already exist for that through @nexart/ai-execution and @nexart/signals. What @nexart/agent-kit does is make that path simpler. It is a thin convenience layer for builders who want agent tool calls and final decisions to produce tamper-evident, verifiable execution records with minimal integration work.

With it, builders can:
• wrap a tool call so each invocation can produce its own CER
• certify the final decision or workflow outcome
• keep using standard NexArt execution records and verification flows

What it is not:
• not an agent framework
• not orchestration
• not planning
• not memory
• not a new protocol surface

It is just a cleaner way to make agent workflows CER-native. That was the goal.

https://www.npmjs.com/package/@nexart/agent-kit
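The "wrap a tool call" pattern can be sketched like this. To be clear: `wrapTool` and the record fields below are hypothetical illustrations of the pattern, not the actual @nexart/agent-kit API.

```javascript
// Hypothetical wrapper: every invocation of the tool also emits an
// execution record via a caller-supplied recordCER callback.
function wrapTool(name, fn, recordCER) {
  return async (...args) => {
    const startedAt = Date.now();
    const output = await fn(...args);
    // One record per invocation: inputs, output, and timing.
    await recordCER({ tool: name, args, output, startedAt, finishedAt: Date.now() });
    return output;
  };
}

// Usage: the wrapped tool behaves exactly like the original,
// but each call leaves an evidence trail.
const records = [];
const search = wrapTool(
  "search",
  async (q) => `results for ${q}`,
  async (rec) => records.push(rec)
);
```

The value of the thin-layer design is that the agent code barely changes: you wrap the function once and keep your existing call sites.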
1 reply
2 recasts
5 reactions


@arrotu

https://paragraph.com/@artnames/ai-audit-trails-vs-verifiable-execution AI audit trails vs verifiable execution
0 reply
1 recast
4 reactions


@arrotu

All the elements of NexArt as of today:

Core infrastructure:
• Nexart.io
• docs.nexart.io
• verify.nexart.io
• @nexart/codemode-sdk
• @nexart/ai-execution
• @nexart/ui-renderer
• @nexart/cli
• node.nexart.io
• NexArt protocol v1.2.0

Apps:
• Nexart.xyz (what started this 😍 - creative coding)
• Byxcollection.xyz (creative coding)
• Fronterria.xyz (open world game)
• Nexartaiauditor.xyz (verifiable AI execution)
• Velocity.recanon.xyz (financial simulation with certification)
• nexartscience.xyz (reproducible research)

All of these apps are built with the NexArt SDK and/or use the NexArt node.
0 reply
39 recasts
43 reactions


@arrotu

gm

The Missing Layer in AI Systems: Verifiable Execution
https://nexart.io/blog/verifiable-ai-execution
1 reply
0 recast
3 reactions


@nexart.eth

We formalized the CER protocol semantics and aligned them across the stack:
• SDKs
• CLI
• node
• public verification

This means verification is no longer just “working”: it now follows a shared protocol contract.

We also hardened certificate identity handling so both the original source hash and the resealed public hash resolve correctly in public verification.
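The dual-identity behavior can be pictured as a lookup that maps both hashes to the same certificate. This index structure is purely illustrative, not how the NexArt node stores records.

```javascript
// Hypothetical index: one certificate, two resolvable identities.
class CertIndex {
  constructor() {
    this.byHash = new Map();
  }

  add(cert) {
    // Register the certificate under both the original source hash
    // and the resealed public hash.
    this.byHash.set(cert.sourceHash, cert);
    this.byHash.set(cert.publicHash, cert);
  }

  resolve(hash) {
    return this.byHash.get(hash) ?? null;
  }
}
```

Either hash resolves to the same record, so a verifier holding only the resealed public hash is never stranded.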
0 reply
0 recast
2 reactions


@arrotu

Gm. Today we shipped three major updates to NexArt:
• CER API for certifying AI executions
• n8n integration for workflow automation
• upgraded public verifier

AI decisions can now produce Certified Execution Records (CERs) that anyone can verify.

Create one with: POST /v1/cer/ai/certify
Then verify it publicly. Example record: https://verify.nexart.io/e/retest-certify-002
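A sketch of what a call to the certify endpoint might look like. The `POST /v1/cer/ai/certify` path comes from the announcement; the base URL, bearer-token auth, and payload fields are assumptions, so check the NexArt docs for the real request shape.

```javascript
// Build a certify request without sending it, so the shape is easy to
// inspect. In real code you would pass this straight to fetch().
function buildCertifyRequest(baseUrl, apiKey, execution) {
  return {
    url: `${baseUrl}/v1/cer/ai/certify`,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
    },
    // execution: whatever the API expects (model, input, output, ...)
    body: JSON.stringify(execution),
  };
}

// Hypothetical usage:
// const req = buildCertifyRequest("https://node.nexart.io", API_KEY,
//   { model: "gpt-4o", input: "...", output: "..." });
// const res = await fetch(req.url, req);
```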
1 reply
2 recasts
6 reactions


@arrotu

NexArt: From Artnames to AI Execution Infrastructure

This is not a product update. It’s the path.

Phase 1 — Artnames (August 2024)
A simple idea: convert names into art. That became Artnames. A tiny team formed.
◼ CORE
 ◻ Founder
◼ TEAM
 ◻ Developer (departed)
 ◻ Marketing (departed)
Momentum felt real, briefly. The developer disappeared. The marketer left to run a gaming company. Suddenly it was just me. No coding experience. No marketing background. Just an idea I refused to drop. The structure collapsed. The idea survived.

Phase 2 — Rebuilding Alone as NexArt
Everything was stripped back to basics. NexArt.xyz: a geometric canvas I could realistically build. Simple. Achievable. As coding skills improved, the engine evolved — modes, deterministic rendering, eventually Code Mode.
◼ CORE
 ◻ Founder
◼ PRODUCT
 ◻ NexArt.xyz
Solo. Focused. Survival.

Phase 3 — First Ecosystem Layer
NexArt matured beyond a single site. The question became: could others build on this engine? That led to the NexArt Foundation and the Nexa Token.
◼ CORE
 ◻ Founder
◼ ECOSYSTEM
 ◻ NexArt.xyz
 ◻ NexArt Foundation
 ◻ Nexa Token
Stable. Functional. It could have stopped there.

Phase 4 — Choosing Infrastructure
A new mode was needed: byX. Two paths:
• Build it inside NexArt.xyz (easier)
• Extract the engine as a standalone SDK (harder)
Infrastructure was chosen. byXcollection was built independently using the SDK. NexArt split into two parallel tracks: an independent engine, and the products built on top of it.
◼ CORE
 ◻ Founder
  ▼
◼ INFRASTRUCTURE
 ◻ CodeMode SDK
◼ ECOSYSTEM
 ◻ NexArt.xyz
 ◻ byXcollection
 ◻ NexArt Foundation
 ◻ Nexa Token
That structural decision changed everything.

Phase 5 — Infrastructure for the AI Era
To make the art engine work, the system had to be fully deterministic and replayable. As the SDK matured, a realization emerged: the execution discipline required for verifiable generative art was the same discipline missing in enterprise AI systems.

AI is moving rapidly into finance, insurance, trading, and agentic workflows. But most systems share a structural weakness. They cannot:
• Replay decisions deterministically
• Certify exact configurations
• Survive audits with cryptographic proof
The execution engine built to verify art turns out to be exactly what enterprises need to verify AI. The infrastructure expanded accordingly.

NexArt Today (2026)
Clear separation: infrastructure is modular and independent; the ecosystem operates on top without entanglement.
◼ INFRASTRUCTURE (Core Execution Layer; Growing)
 ◻ CodeMode SDK
 ◻ UI Renderer SDK
 ◻ AI Execution SDK
 ◻ Canonical Node
 ◻ Protocol
 ◻ NexArt.io (Verification Platform)
 ◻ Protocol Demos
◼ ECOSYSTEM (Built on Top; Stable)
 ◻ NexArt.xyz
 ◻ byXcollection
 ◻ Frontierra
 ◻ NexArt Foundation
 ◻ Nexa Token

The Evolution
Artnames → personal art engine → SDK → infrastructure → verifiable AI execution framework. This was not a pivot. Each limitation forced a structural choice. Being alone forced clarity. Clarity forced architecture. Architecture created infrastructure.

The Next Phase
AI is now embedded everywhere, and entering regulated domains. Soon, three requirements will become non-negotiable:
• Determinism
• Replayability
• Certification
NexArt is built for exactly that layer. Not another model. Not another interface. Execution infrastructure beneath AI systems.

Vision
To become the trusted, invisible layer for execution integrity. Any decision, human or machine, captured, version-pinned, replayed, verified, certified. Across industries. Across jurisdictions. Across time. Good infrastructure disappears. It simply works, reliably.

What started as turning names into art has evolved into infrastructure for trustworthy AI in a regulated world. If this resonates, whether you’re a builder needing deterministic execution, an investor focused on verifiable AI infrastructure, or a researcher exploring auditability, let’s talk.
1 reply
1 recast
5 reactions


@arrotu

https://paragraph.com/@artnames/verifiable-ai-decisions-need-a-standard-record-introducing-aief
2 replies
51 recasts
69 reactions


@arrotu

What if every AI decision could be independently verified? Here’s a 40s demo built on @replit: LLM response → policy engine → CER → canonical attestation. No black box. Try it: nexartaiauditor.xyz (Mock LLM included, or plug OpenAI/Claude + NexArt API)
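The demo's LLM → policy engine → record flow can be sketched in a few lines. Every name and field below is illustrative; this is not the nexartaiauditor.xyz implementation, just the shape of the pipeline (the real version would hash the record and attach a canonical node attestation).

```javascript
// Mock LLM, as in the demo: deterministic, no API key needed.
function mockLLM(prompt) {
  return { prompt, answer: `echo: ${prompt}` };
}

// Example policy engine: reject empty answers.
function policyEngine(response) {
  return { allowed: response.answer.length > 0, rule: "non-empty-answer" };
}

// Assemble an execution record from the run.
// A real CER would also carry a canonical hash + signed attestation.
function buildRecord(response, decision) {
  return {
    input: response.prompt,
    output: response.answer,
    policy: decision,
    timestamp: Date.now(),
  };
}

function runPipeline(prompt) {
  const response = mockLLM(prompt);
  const decision = policyEngine(response);
  return buildRecord(response, decision);
}
```

Swapping `mockLLM` for a real OpenAI/Claude call leaves the rest of the pipeline unchanged, which is the point of the demo.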
1 reply
5 recasts
12 reactions