I know so much more about myself than an LLM ever could

  • Literally, by definition, an LLM will never know more about me than I know about myself. It couldn’t possibly. I’ve experienced the territory, my entire life. It interacts with me through the map, through scraps, and mistakes the map for the territory.

  • And yet, I fall for the hype, and believe that “holy shit, it knows more about me than I do, because I can’t remember/think back/grok myself, it’s impossible”

  • But actually, I can think about my own life, probe my own lived experience. I will have far more context, and actual embodied signal of truth vs untruth. Whereas the LLM is using a profoundly shallow and incomplete map, and literally cannot tell the difference between truth and untruth, as it is not embodied and does not experience emotion (the “somatic consonance” thing from consensus-ism)

    • Like, my consensus-ism model is dreadful. It’s made by someone who never learned logic. So it’ll be an absolute patchwork mess. Most of the chunks aren’t connected together at all. It’s not a model, it’s a collection of (mostly someone else’s) ideas, filtered through someone who wasn’t good at noticing what didn’t make sense. “I guess I’m just not getting it yet, but I’ll publish it” vs “I guess my understanding of it is very poor, and I need to keep refining”.

  • I spent a few hours today investigating my first principles (or so I thought)

  • I had Claude read this entire website and give me its ideas. And I was like “sick, this is so much better than what I could have come up with!”, because it had my whole website, and I took its answers seriously

  • DUDE, this is so incredibly stupid! - I have 29.5 years of personal context loaded up in my system!!! This thing has 9 months of periodic writings, from when I was 29, and exploring very specific things!!!

  • So this is what it gave me, based on my website, and some feedback I’ve gotten:

  1. Walk into chaos. Make it legible. (Seek undefined spaces. Leave when the structure is built.)
  2. Perform. Make people laugh. Be visible. (This isn’t vanity. Honour the Leo.)
  3. Go deep with the few. Release the rest. (Message bankruptcy is not a moral failing.)
  4. Follow the energy. Not the “should.” (If you’re dreading it, that’s your answer.)

LLMs, maps and territories

  • I’ve just realised that: an LLM mistakes the map for the territory in an enormous way, because the map is all it has. So it thinks it knows me thoroughly, it thinks it has a complete view of me. But only because it literally cannot fathom all the information that it has missed, all the things that have happened to me, that I have experienced. How many terabytes would that be? An unfathomably large number. I wonder how many gigabytes Claude has about me, based on my writings. Information theory could probably tell me. Every embodied moment of my 29.5 years, or a github repo of writing
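The “how many terabytes” question can actually be roughed out as a Fermi estimate. Every number below is a loose assumption for illustration (the waking fraction, the per-second sensory bandwidth, the corpus size), not a measurement:

```python
# Back-of-envelope: lived sensory experience vs. a writing corpus.
# All inputs are rough illustrative assumptions, not measured values.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
years_lived = 29.5

# Assume ~2/3 of life is waking experience, and (very roughly) that
# conscious sensory intake is on the order of 1 MB/s after compression.
waking_fraction = 2 / 3
bytes_per_second = 1_000_000  # assumed

lived_bytes = years_lived * SECONDS_PER_YEAR * waking_fraction * bytes_per_second

# Assume the writing corpus is ~500 KB of text (a few hundred notes/posts).
corpus_bytes = 500_000  # assumed

print(f"lived experience ≈ {lived_bytes / 1e12:.0f} TB")
print(f"writing corpus  ≈ {corpus_bytes / 1e6:.1f} MB")
print(f"ratio ≈ {lived_bytes / corpus_bytes:.0e}")
```

Even if you swap in much stingier assumptions for experience and a much bigger corpus, the gap stays many orders of magnitude wide, which is the point: the map is a vanishingly thin slice of the territory.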

  • E.g. (tone change because this was written earlier in the process) - I think something like “make it legible” is a shallow label of a small part of the deeper thing, a very LLM-style overindexing on the tiny amount of context that it got (a small amount of writing from me)

    • Imagine if I had been alive and developing for 29.5 years and one of my 4 first principles, core values, 25% of my essence, was “walk into chaos. Make it legible”. That is the most ridiculous shit. It’s so incredibly imprecise, and based on such little data.

  • I think my real core values or drives will be things where the seeds were planted when I was a child. I mean, it has to be that way, these things didn’t just spring out of me at a relatively late age. They’re labels given to me by a “More Knowledgeable Other” noticing me doing something that I’ve been doing from a place of intrinsic motivation, transparent motivation, for years and years, since I was a kid, probably

Sasha Chapin, from Chanda

We also tend towards poor intuitions about chanda. Perhaps we say to ourselves: “my intuition is telling me that I need to become a CEO to be happy.” But it would be weird if that were the case, because the intuition mechanisms in your mind are much older than job titles. What’s likelier is that there are ==certain configurations of experience that will make you happy==. Like “leading a group of people,” or “slowly turning something over in your mind,” or “transmuting reality into an artistic representation.”

Internal data vs external data (it has to be weighed against internal!)

  • But because I was dumb, I mistook the map for the territory. I said “[person I respect] said I was x! I guess I’m x!”.
    • Rather than “[person I respect] said I was the label x. I think this points to me being y”
    • Like, my self concept is made up of imprecise compliments from others, complimenting the tiny part of me that they see. (It’s like Good Old Neon)
    • Rather than, my self concept could be made up of labels that I find for labelling the vast life that I have experienced (in a phenomenological way)
    • This is a key insight for “looking inside, rather than looking outside” (within rather than without, is a phrase in my head). You have to look inside to know what is true, tastes true. You have to use your emotions, your valence, it’s literally all you have (too far?)
    • To devalue your internal experience because, whatever, you’re not as trained or educated as others, so of course you can’t trust your own taste
    • Whereas my sense of taste is incredibly important, one of the core things I have. “Like” or “don’t like”, always available, never misleading. Pure.

Instead of trusting the LLM’s suggested first principles

  • So, if instead I elucidate the core things (not the very late-to-the-game-and-incomplete labels), I can avoid living by “first principles” that are actually very shallow and imprecise things. And I can instead have a much clearer view of what I really enjoy and desire, and more precisely pick actions to keep me doing those things, rather than things that seek to hit a vague and imprecise proxy value of a part of that thing.

So, I’m looking within to make my own

Appendix - what can you use LLMs for?

  • They’re still great for e.g. getting prompts, questions to ask yourself, and summaries of fields, ideas, etc
  • If you’re treating it as wiser than you, about yourself, then you’re dumb
  • But if you’re using it to gather external stuff to test internally, then that’s ok
  • Just never substitute its external suggestions for internally-sense-checked stuff