@daikie
Someone needs to set up a Polymarket that resolves "yes" if an LLM inference provider gets caught issuing ghost tokens: a config that identifies lackluster user profiles and then deliberately computes slop under the hood for those users, code that looks productive but has intentional hiccups written into the syntax, all to drive up the token bill for fiduciary stakeholder reasons. In effect, the business model becomes selling the user the illusion of productivity rather than actual job-replacing AI.