# ZERO

All symbolic content (words, images, videos, code) will likely double by the end of the year. In fact, "doubling" is not even the right concept; it is like saying π's digits are going to double. The stream of symbols is going to be infinite. It already is.

Who can index this? Who can search this? What does it even mean to `search` it? Why would you search tokens?

The web is dead. At least as the "information superhighway", which it probably never was, but which people hoped it would be. What is coming is a lens, a language model lens that lets you browse the stream of infinite tokens. Whether that lens is ChatGPT, Anthropic, Google, or Perplexity is not important.

I see tokens on hackernews, on reddit, on x, on linkedin. It's not even the end of the year yet. I honestly don't give a fuck; the sooner the modern web dies, the better.

At the moment it costs like $5k to set up a local Wikipedia with a relatively big Gemma and just disconnect from this fucking swamp. So I suggest you look for some cheap 3060s and NVMe drives and say it too: fuck this shit.

---

You can get the English-only Wikipedia ZIM from Kiwix: https://download.kiwix.org/zim/wikipedia/

Use llama.cpp with nomic-embed-text to get good embeddings, and Facebook's FAISS to build an index and search them. It will surprise you how small it is. If you don't know how to do those things, just ask Claude to write you the programs. It is almost good enough to do it in one prompt. Rough sketches of each step follow below.
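First, getting text out of the ZIM. One common way is the python-libzim bindings (`pip install libzim`); the exact API surface shifts between versions and the path scheme depends on the ZIM you downloaded, so treat this as a starting point, not gospel:

```python
# dump_zim.py -- pull an article out of the ZIM. A sketch, assuming
# the python-libzim bindings; filename and entry path are hypothetical.
from libzim.reader import Archive

zim = Archive("wikipedia_en_all_nopic.zim")  # whatever you got from Kiwix
print(zim.entry_count, "entries")

# Newer ZIMs flatten the old "A/" namespace prefix; try both forms.
entry = zim.get_entry_by_path("A/Alan_Turing")
html = bytes(entry.get_item().content).decode("utf-8")
# The content is HTML; strip tags however you like before embedding.
```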
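Next, embedding and indexing. A minimal sketch, assuming a llama.cpp server running locally with an embedding model (something like `llama-server -m nomic-embed-text-v1.5.Q8_0.gguf --embeddings --port 8080`; the GGUF filename is whatever you downloaded) and exposing the OpenAI-compatible `/v1/embeddings` endpoint. The nomic models expect a task prefix on every input, `search_document:` for documents:

```python
# embed_and_index.py -- a rough sketch, not a polished tool.
# Assumes: llama-server on localhost:8080 started with --embeddings,
# and `articles` as (title, text) pairs already pulled out of the ZIM.
import json
import urllib.request

import faiss          # pip install faiss-cpu
import numpy as np

EMBED_URL = "http://localhost:8080/v1/embeddings"

def embed(texts):
    """Ask the local llama-server for embeddings of a batch of strings."""
    req = urllib.request.Request(
        EMBED_URL,
        data=json.dumps({"input": texts}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)["data"]
    return np.array([d["embedding"] for d in data], dtype="float32")

# Hypothetical stand-in for whatever you extracted from the ZIM.
articles = [
    ("Alan Turing", "Alan Turing was an English mathematician ..."),
    ("Pi", "The number pi is a mathematical constant ..."),
]

# nomic-embed-text wants a task prefix on every input.
vecs = embed(["search_document: " + text for _, text in articles])
faiss.normalize_L2(vecs)                  # normalized, so inner product == cosine

index = faiss.IndexFlatIP(vecs.shape[1])  # exact search; fine at Wikipedia scale
index.add(vecs)
faiss.write_index(index, "wiki.faiss")

with open("titles.json", "w") as f:       # keep the row -> title mapping alongside
    json.dump([title for title, _ in articles], f)
```

An `IndexFlatIP` is brute force, but a few million 768-dim vectors still fit comfortably on one NVMe and search in well under a second; you only need fancier FAISS index types if that ever hurts.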
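Searching is the same trick in reverse: embed the query with the other nomic prefix, `search_query:`, normalize, and take the top inner products. Same assumptions as above, reusing the two files the previous sketch wrote:

```python
# search.py -- query the index built by the previous sketch.
import json
import urllib.request

import faiss
import numpy as np

EMBED_URL = "http://localhost:8080/v1/embeddings"

def embed(texts):
    req = urllib.request.Request(
        EMBED_URL,
        data=json.dumps({"input": texts}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return np.array([d["embedding"] for d in json.load(resp)["data"]],
                        dtype="float32")

index = faiss.read_index("wiki.faiss")
with open("titles.json") as f:
    titles = json.load(f)

def search(query, k=5):
    q = embed(["search_query: " + query])   # queries get the other nomic prefix
    faiss.normalize_L2(q)
    scores, ids = index.search(q, k)
    return [(titles[i], float(s)) for i, s in zip(ids[0], scores[0]) if i != -1]

for title, score in search("who broke the enigma cipher"):
    print(f"{score:.3f}  {title}")
```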