Transcript of Attractors in the Space of Mind Architectures and the Super Cooperation Cluster, a 45-minute conversation between Aneesh Mulye and Romeo Stevens.
This version doesn't disambiguate the speakers. Maybe I'll make a cleaned-up one later.
To begin with, I wanted to get a clearer idea of the context of both of these.
Okay, so specifically, specifically the complexity prior and the... Yeah, the homeostatic prior? No,
the complexity prior and the super cooperation cluster. So you said humanity is grappling with the Darwin meme. Yes, and these seem to be the only two things that even remotely, I don't know, seemed to have the heft to deal with it. Okay,
yeah. So the way I put it is that I see most people as doing one of two things. They're either sort of reaching up from the bottom: if you start off with a materialist, reductionist sort of stance, then you're kind of trying to recreate all of our meaning structures. You're trying to bridge that is-ought gap. It's like, okay, we have all this stuff that is going on, and then we're trying to start from the is. Yeah, they start from the is, and they try to get to the ought. When people start from the ought, sort of ex nihilo, right, they come up with a lot of wacky stuff. I mean, there's two things it's predicated on. One is just whatever humans have a tendency to believe. The other is what humans in the past have believed, right? Whatever people have come up with that's survived over centuries, right? Various religions and other moral frameworks that people read and they're like, oh, that's pretty clever. That seems consistent, or it seems like I'm able to apply that in a sort of fuzzy heuristic way, right? Utilitarianism, deontology, virtue ethics, stoicism, you know, particular schools of thought. It's like, this is a basic framework for acting. They
pick something that seems, given their life experience and intuitions, to be
workable, pragmatic, yeah, or even resonant, yeah. And people get frustrated with the sense that these things are underspecified or not ultimately grounded, right? You have the Münchhausen trilemma, right: either infinite regress, or unjustified axioms, or circular justification. So I'm attempting to do something like the Münchhausen triumph
of grounding an ought that is purely an ought, in a sense.
Well, so... it is. It's informed by... I am trying to go down from the ought, instead of up from the is. But it's informed by sort of a descriptivist agenda of, first, see what is, in terms of: oh, humans have these behaviors around meaning structures, and some of them seem to operate well, like people feel like their actions have meaning, and they feel like they are oriented towards useful levers in their world. And other people don't; they're handed one, but they find it insufficient, and then they, you know, abandon it, and then they cast around for replacements. They don't have good meta-heuristics for, how do I select an ethical system? That's why, you know, often meta-ethics is the term that gets thrown around. So where
people have issues, given their ought sense, is that often they have a problem owning the fact that there is, in fact, a selection of ethical systems that they are doing, because if you admit that, you lose the internalist aspect of things. Meaning... yeah, so they have to frame things in terms of, well, you see, I thought that was correct, but it turns out that actually... as opposed to just, no, I'm doing a search procedure,
right? Yeah, and for secular people, the ancestral environment and the Darwin meme sort of serve that purpose, but very poorly, right? And so they wind up with nihilism. But yeah, I would agree it's something more
pernicious than nihilism, actually. It's not that things don't mean anything. It's more that everything you know and love, its actual meaning is, in fact, horrible, not anything good.
Well, it’s just, it’s all. It all reduces to impersonal forces, yeah, which is not commit doesn’t resonate with the internal experience. And so ultimately, it feels like it’s an unsatisfying explanation. And obviously we have so we have Mars levels of distraction which tell you why this isn’t a satisfying explanation, right? So a person comes to you and asks for an explanation on the intentional level of, how should we orient towards the world? And you give them an algorithmic or a physics based response of like, well, you know, it’s all just mechanistic systems. It doesn’t, it’s not you didn’t actually answer them on the level of intention,
right? I mean, I would go one step further, as I said, like this more than just like, it is a sort of of all the possible overlays, right, that you could give on your sensory experience. The darginian overlay is a particularly cynical one, because, as it says, It confuses intention with the causal mechanism that led to the beings having intention in that particular way. But humans are not good at maintaining this distinction, and so they end up thinking, Well, Bob is doing this to maximize his fitness. And so. It makes a wide variety of human experience just straight up unavailable, like I am loved is cognitively impossible to believe. If you think it’s just other people pretending to love you, because that will maximize their fitness, even if they are actually just Gen they they genuinely give a shit, right?
They’re having an internal experience that is totally in line with a more intuitive sense of, like, oh yeah, there’s this felt sense to caring about people and being cared for, right? That feels lost, yeah, right.
And if all the time you ascribe to that the meaning of, this Machiavellian person is, you know, doing this, then: okay, one, it's not true. Two, you're making yourself miserable. And three, this is an extremely difficult trap to get out of, because you can always point to, well, but the causal process... it's more like a salience trap than an actual, you know, content trap,
yeah, and I think this is part of why the Azathoth meme, or just sort of reifying, anthropomorphizing impersonal forces, is an effective, or not ineffective, but at least a partial antidote, right? So if I posit the existence of Azathoth, I can sort of gesture at, well, yeah, Azathoth is Machiavellian, but you know, my goals are not necessarily the same as Azathoth's, even though I'm running on hardware that was optimized by Azathoth. Azathoth being the personification of evolution, right?
I mean, to be entirely honest, choosing Lovecraft was not a neutral decision for this pantheon, because the reason he was picked is because of the sense of horror people felt. And I don't think it's a good idea to pick Azathoth, because it fundamentally just fixes in place the emotional tone of horror as dominant, as opposed to alienness as dominant, because alienness brings with it the possibility of wonder. Horror does not; it fixes the tone in place. It's a good point, interesting. So to summarize, just so that I've got it correctly: people start with the is and end up in all kinds of icky places, because the space of the is is vast. People start with an ought and find it unsatisfying, either because conditions have changed tremendously since most of these oughts developed, or because the Darwin meme renders a lot of these oughts not workable. There's eternalism, the problem of not being able to just acknowledge directly that, yes, you're looking for an ethical system, because your current one is unsatisfactory. But I think I'm missing some core component. What would you say is the core piece of the problem with starting with the ought, like you have with the is?
So I view a lot of it descriptively. If we're just going out and looking at what humans are doing when they do sort of moralizing, what I see is an attempt to systematize the moral intuitions, in the hopes that... so, what do we accomplish when we systematize? Well, you render it sort of predictive and extensible and modular. It's like, okay, I'm going to be able to figure out, using some sort of simple rule-based system, how to apply this system, and also to apply it to novel situations, and maybe create some coordination points, because if it's systematized, we can more easily communicate it, talk about it, coordinate around it. Like, you know, different interpretations of law, right, are sort of philosophical arguments rendered into these Byzantine sort of... let's pretend that the
bureaucracy is sufficient, yeah, in such a way that it already encodes all our shared understandings. So that all our fights about deep theology translate into the minutiae of procedural rules. No way that could possibly lead to problems.
Yeah. So, okay, we have this agenda of: we want to be able to systematize our moral intuitions. But where do you start? That's the question, right? So one of the core ideas... well, let's see, there's a couple of different slices on this. One is the navigational schema, one is the super cooperation cluster itself, and the other is maybe the homeostatic and complexity priors, and they all relate ultimately. But
okay, so to set the context: the ought, tried from a top-down view, doesn't work for various reasons, right? The is, started from a bottom-up view, also runs into very weird problems. What are the problems that is-people run into? And why is Darwin relevant? That seems to be the crux here, right? Why does that make both of these approaches unworkable, whereas they seemed sort of workable at a small scale before? They're only
workable as long as you don’t examine their foundations, okay?
The reason the ought... or both? Both.
So, traditionally, and this is actually good additional context, traditionally the is-ought problem is seen as a bridging problem. Okay, you have the is and you have the ought, and you need to bridge between them. If you examine experience, this is obviously false. You have direct access to neither the is nor the ought, right? So you have indirect realism. You don't actually have direct access to how things are. You've never seen an electron, right? So what you have is heuristics that return is-like answers: oh, physical laws, consistency, invariances, you know, etc., etc. And then you have other heuristics that return ought-like answers: oh, that's interesting, that's outrageous. Yeah, these are moral intuitions, this group's moral intuitions, the game theory of how these people interact, how do we feel about laws, etc., etc. So you are always this heuristic bridge, and one end sort of reaches towards is, one end sort of reaches towards ought, and you are already solving the is-ought problem on a daily basis. When you get up to go eat an apple instead of an orange, you're solving the is-ought problem using just fuzzy heuristics.
There is an apple. Should there be an apple? Yes, it should be in my stomach. Okay, so then why is there a gap like this in the first place? I know this takes us slightly afield, but why? Like, what necessitated the heuristic bridge? Because
there are things that are the case that you don't have direct control over, and you have to decide how to act in relation to those.
Yes, but why is there such a thirst for systematization, in the sense that we do not find, or at least my suspicion is we don't find, these same kinds of problems in genuinely small-tribe human societies? Like, there is no is-ought gap there.
I disagree. So the hack that evolution initially found was animism, right? Treat the physical world as if it's intentional, as if the ises are all oughts, right? And then you get things like the naturalistic fallacy, etc., etc. But this is fairly consistent; like, it can be made consistent by just telling stories, and the stories have enough degrees of freedom that you can sort of fit stories to various things. Well, the clouds rain because they want to, right? And then you have extended stories, and whoever is the best storyteller, you know... and what counts as best, right? Well, the version that best compresses the sensory data of people's actual experience, okay?
And that’s not workable in modern times, merely because of the fact that the complexity is not easily fitable into a narrative framework,
yeah, and also the amount of it. So there's the complexity on the is side, but then also complexity on the ought side, because there are so many intentions in the world all clashing at the same time, at all different levels of abstraction, and no one's really keeping track. And the way humans work is, you know, we're not forced to stick to a particular level of abstraction when we engage in our sort of memetic warfare, right? Memetic warfare, it probably has some
verbal... mimetic, memetic, or
both, both. But, yeah, so one question that we can ask is: we know sort of where is comes from, like, here it all is, right? There's rocks, you can throw them at things, other things happen as a result of what those rocks hit. So where do moral intuitions come from? And one answer for where moral intuitions come from is that just a basic homeostatic system can be said to have oughts that it's instantiating on the environment, right? So a bacterium that can only survive between two temperatures, and has a temperature sensor, and the evolution of that bacterium is constrained by, well, bacteria that go outside of this temperature range die. So any
homeostatic system has an ought overlay that it can impose,
okay, you can interpret it as sort of imposing an ought on its... okay. So
any homeostatic system can be thought of as imposing an ought, both on its environment and presumably on its response to those environmental conditions.
Yeah. And so we do have a research agenda, in the sense that it's not known how more complex goals arise in, say, a nervous system, from a simple bacterium up to a mammal. How does it build up higher-order goals out of the very low-level goals? And we seem to be creatures that are able to do a type of speculative extension, where we're able to just define our own intermediate goals. We're like, well, what if I tried maximizing this thing for a while, this measure? I'll invent my own measure, go off and try to improve it, and see: does that meet my underlying needs well, or no? And when we shop for different value systems, when we see people choosing different strategies, we set up these intermediate goals. It's like you're generating a new thermometer. You have this whole bank of sensors: temperature sensors, pressure sensors, various homeostatic sensors, and then your own internal sensors for how well your organism is handling things, how hungry you are, blah, blah, blah, your appetites. So, all these sensors, and we seem to construct speculative sort of measures, right? What if I tried doing this thing that increases this, and then see if that sort of meets my needs,
right? Oh, so we can experiment. A bacterium cannot experiment, but we can just poke at stuff and say, right, what if I tried... I mean, it's unlikely there were diet fads in the Paleolithic, but what if some paleo guy is like, yeah, I'm gonna go on a carnivore diet because I'm really, really rich. Let's see what happens.
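To make the "speculative intermediate measure" loop concrete, here is a minimal toy sketch in Python. Everything in it, the setpoints, the actions, the two candidate proxies, is invented purely for illustration and is not from the conversation or any model it references; the point is just the shape of the loop: invent a proxy measure, greedily optimize it for a trial period, and keep it only if the underlying homeostatic errors actually shrank.

```python
# Toy sketch only: invented sensors, actions, and numbers.
SETPOINTS = {"temperature": 37.0, "satiety": 10.0}   # the bacterium-style "oughts"

def homeostatic_error(state):
    """Total deviation from the setpoints; the organism 'wants' this to be low."""
    return sum(abs(state[k] - target) for k, target in SETPOINTS.items())

# Each action nudges the state; "collect_rock" does nothing for the setpoints.
ACTIONS = {
    "insulate":     {"temperature": +2.0},
    "forage":       {"satiety": +2.0},
    "collect_rock": {"rocks": +1},
}

# Candidate intermediate measures the agent can try optimizing for a while.
PROXIES = {
    "comfort_index": lambda s: -(abs(s["temperature"] - 37.0) + abs(s["satiety"] - 10.0)),
    "rock_count":    lambda s: s["rocks"],
}

def apply(state, deltas):
    new = dict(state)
    for key, delta in deltas.items():
        new[key] += delta
    return new

def trial(proxy, state, steps=50):
    """Greedily optimize the proxy for a while, then report the homeostatic error."""
    s = dict(state)
    for _ in range(steps):
        s["temperature"] -= 0.5   # the environment is cold
        s["satiety"] -= 0.5       # metabolism burns energy
        # Take whichever single action most improves the proxy right now.
        s = max((apply(s, deltas) for deltas in ACTIONS.values()), key=proxy)
    return homeostatic_error(s)

if __name__ == "__main__":
    start = {"temperature": 30.0, "satiety": 5.0, "rocks": 0}
    baseline = homeostatic_error(start)
    for name, proxy in PROXIES.items():
        err = trial(proxy, start)
        verdict = "keep it" if err < baseline else "abandon it"
        print(f"optimizing {name:13s}: error {baseline:.1f} -> {err:.1f}, so {verdict}")
```

The "comfort_index" proxy survives because optimizing it happens to keep the toy organism near its setpoints; "rock_count" gets abandoned because the bank of underlying sensors reports that it did nothing for the organism's actual needs.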
So, yeah, the homeostatic prior, you could see it as sort of extending... you're attempting to see how you could live under higher-variance conditions, like a broader range of environments. Ah. So
what you're saying is that we are not constrained by... well, not that we are not constrained, but that the constraints allow us far more degrees of freedom in what we... well,
the bacteria can’t figure out to try putting on a fur coat to be able to survive in a wider range of environments. Exactly. Yeah. But we can, and then some of those creatures survive. And now there’s selection pressure on expanding your homeostatic range, right? So I think there’s something, something around the homeostatic complexity prior there of like, I see, yeah, okay, although the complexity prior covers a lot more than that. So the
idea is that if you can survive both in the Arctic and in the desert and in Saudi Arabia or something, and one hopes, also in the intermediate climate, because it’d be really weird if you couldn’t, then, in some sense, you are more robust as a race than, you know, some hominid that,
right, right? So this is all still sort of reaching up from the is, this is all scaffolding, right?
The homeostatic is, is very clear. Yeah, exactly, exactly. Well, it's sort of how
nature answers the question of there being an ought, or, like, why there would be an is and an ought in the first place. So what I'm doing with the super cooperation cluster idea is...
what’s the conflict? However, here, like the conflict that you’re trying to solve, that the fact is that these heuristics no longer work, or that the grounding process doesn’t work, or like, what exactly, if you had to characterize it, is the issue here where the is, people cannot build up to a consistent and proper art, and the Art cannot, people cannot ground out in a thorough is
sure, yeah, exactly. So from is to ought, there’s nihilism. And from ought to is, there’s like crazy religious nonsense that doesn’t it. It’s not a causal fact. You can imagine, a healthy religion like Buddhism probably comes the closest in terms of, like, it’s fairly compatible with science the fair, you know, the Buddhist leaders, blah, blah, blah. Everyone’s sort of already familiar with that. But in practice, most religions seem to calcify around particular worldviews, and they don’t seem to be very good at inculcating sort of adaptive strategies to their to their flock. Unfortunately. Yeah. So I can, I can also, so then I’ll try to cut in from another angle and see if they, see if they converge somewhat.
So what is then the positive goal of a healthy religion? One might say
that going both ways is fine, that people can go up and down the stack, no problem, and that people have some self-awareness of what they're doing when they move up and down the stack, and that explanations of the world at these multiple levels are in concordance with one another, instead of dissonant with one another, and
also in concordance with, I hope, what leads to a happy and virtuous or healthy human life, right? Well,
that’s, that’s the art.
Okay, so, okay, what’s your is here, and what’s your art here? Let’s
All right, so I’m getting to it is, the is, the sort of is, the is, the odd. But let’s, let’s, so let’s, let’s back up and try to, try to, like, explain super hard, otherwise
you wouldn’t be doing anything. Yeah? Exactly. There is an is that you think ought to be different, yeah? So,
okay, so in game theory, you have this concept of third-party punishment, right? And for people that don't know, third-party punishment means that, okay, you have people cooperating. You have cooperate-cooperate sort of equilibria, you have defect-defect equilibria, you have defectors sort of trying to free ride on cooperation norms, you have tit-for-tat equilibria, yeah, yeah. And so third-party punishment means that there are people who are observing the interactions that are happening, and they are noticing when defectors defect, and then they're telling everyone, or punishing, or doing something to cause people to defect against the defectors. They're like, okay, it's not enough that we all just cooperate with each other. We also have to make sure we defect against the defectors, so that defection itself is a losing strategy, and the defectors will be incentivized to switch over to a cooperate-cooperate strategy. And so there's been a lot of research on what causes this, how you get stable cooperate equilibria, and what can tend to undermine them, and whether you can counter that. And this already happens, you know, in society; obviously this is happening all the
time. There's a beautiful example I remember, some Atlantic article or some other article, which talked about how, very often, you know, bureaucracies going from corrupt to non-corrupt, or non-corrupt to corrupt, can sometimes happen in what seems like relatively quick phase shifts. And I think this might explain it. Like, if you have even this effective third-party punishment for corruption, then I think once the probability of that goes above a certain threshold, it just completely flips the calculus. And so within a few years, it's like, boom, even hitherto corrupt people just become non-corrupt, because now they're incentivized to start to...
And the phase shift has actually been super well characterized, in simulation at least. So they found that if a small portion of the network is super cooperators, then the network can flip over from a defect-defect equilibrium to a cooperative
one. So now this seems to give me some intuition for what a super cooperator is. Brilliant. Tell me what it is,
yeah. So the surprising thing is that it goes both ways. They discovered empirically, in actual tests with people, that there are also super defectors that will punish cooperators. And there's been a bunch of follow-up research to try to figure out why this would be; it seems very counterintuitive. You're
saying that in simulation, anyway, they have found that super defectors can, in fact, create a defect-defect equilibrium and enforce it. Yeah, yeah, even if they have to punish cooperators at some small cost to themselves, right? Okay. And do real people, do real human beings, do this? Yes, that was
the surprising thing. In what context? In the context of a study. So it's iterated public goods games, sort of iterated games in which people can choose to contribute some proportion of a chunk of money in each round, and if enough people contribute, there's some sort of multiplier. If few enough people contribute, then it winds up net negative for each individual contributor, because it gets distributed so widely that you don't gain. But if enough people contribute, then everyone actually gains more, because, you know, you've all multiplied your money together. And so defectors can just sit there and free ride, and so people try to, you know, coordinate to punish the defectors. And then, like I said, this surprising dynamic emerges. So the intuition with cooperation is that what we're looking for is the high-level parameters that make systems more metastable in terms of cooperation, so they generate cooperative equilibria, and they tend to incentivize the players in those systems to maintain those cooperative equilibria.
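To make the setup concrete, here is a minimal toy version of an iterated public goods game with third-party punishment, sketched in Python. All the parameters and the update rule are invented for illustration: the punishers are modeled as a small committed minority standing in for the "super cooperators", everyone else copies better-earning players, and antisocial punishment (the super defectors discussed above) is left out entirely. It is a sketch of the general mechanism, not a reproduction of the experiments or simulations being referenced.

```python
# Toy public goods game with third-party punishment (invented parameters).
import random

ENDOWMENT = 10.0  # money received each round
MULTIPLIER = 1.6  # the common pot is multiplied by this, then split evenly
FINE = 3.0        # fine each punisher imposes on each free-rider
FINE_COST = 1.0   # what it costs a punisher to fine one free-rider
ROUNDS = 3000

def round_payoffs(strategies):
    """Strategies: 'C' contributes, 'D' free-rides, 'P' contributes and punishes."""
    n_contrib = sum(s in ('C', 'P') for s in strategies)
    share = n_contrib * ENDOWMENT * MULTIPLIER / len(strategies)
    n_defect = strategies.count('D')
    payoffs = []
    for s in strategies:
        pay = share if s in ('C', 'P') else ENDOWMENT + share  # free-riders keep their endowment
        if s == 'D':
            pay -= strategies.count('P') * FINE  # every punisher fines every free-rider
        if s == 'P':
            pay -= n_defect * FINE_COST          # punishing is costly to the punisher
        payoffs.append(pay)
    return payoffs

def run(strategies):
    strategies = list(strategies)
    for _ in range(ROUNDS):
        pay = round_payoffs(strategies)
        # Imitation: a random non-punisher copies the contribution behaviour of a
        # random better-earning player. Punishers stay committed: they are the
        # "small portion of the network" doing the stabilizing.
        i, j = random.sample(range(len(strategies)), 2)
        if strategies[i] != 'P' and pay[j] > pay[i]:
            strategies[i] = 'C' if strategies[j] in ('C', 'P') else 'D'
    return {s: strategies.count(s) for s in 'CDP'}

if __name__ == "__main__":
    random.seed(0)
    # Without punishers, free-riding pays strictly more each round, so it takes over.
    print("without punishers:", run(['C'] * 20 + ['D'] * 20))
    # With a committed ~15% of punishers, 6 * FINE > ENDOWMENT, so free-riding pays
    # strictly less than contributing, and the population flips to cooperation.
    print("with punishers:   ", run(['C'] * 14 + ['D'] * 20 + ['P'] * 6))
```

The flip in the second run is the same "calculus flip" as in the corruption example: once the expected fine on a free-rider (number of punishers times FINE) exceeds the gain from free-riding (keeping the ENDOWMENT), defection pays less than contributing for everyone at once, and the population snaps over to the cooperative equilibrium.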
Okay? And is there any incentive to players outside the system to
Also... so, yeah. So there's a whole thing around coalitions, and what the rules are about coalitions, and how costly you make joining coalitions, and how the players signal that they're in a coalition, and making those signals unfakeable or costly. But that's a bunch of complicated game theory stuff. The short answer, yeah, the short answer is yes. There's a lot of thinking about how you incentivize people to want to join cooperative clusters rather than try to do something else: free ride, or form defection clusters, or just attack, yeah, and dissolve the cooperation cluster.
That was the historical dilemma, right? Well,
so there’s a whole thing around, like the relative balance of offense and defense, right? So when defense is more powerful just due to the technology at the time, then obviously you’re going to skew towards capital formation. And when offense is favored, then you go towards banditry. Because,
I mean, historically, Central Asia was the repository of horse nomads just randomly blowing people up. So we combine,
so we combine this with the last thing that I was talking about, that we're sort of these extensionalist creatures, we sort of extrapolate, we create these speculative measures. So it's taking this idea of cooperation and extending it out beyond our current understanding, saying: okay, we have some current level of understanding about cooperation, and presumably I understand more about cooperation than my past self did, right? I'm able to cooperate with a broader range of people under a broader range of circumstances, because I sort of understand the structure of cooperation, and I understand how other people work. And so I can extend that into the future and say, okay, my future self is going to understand more about cooperation than I do, be better at coalition building, etc., etc. And we can also think about that in a sort of scale-free way, right? Like, you already are a giant colony of super cooperators, right? All of your cells figured it out. The jump from single-cellular organisms to multicellular took, like, a billion years, or something crazy like that, right? The immune
system also runs a totalitarian regime that keeps everything in check and murders those who don't.
So, right, right? Yeah, the immune system is not fucking around. Yeah, exactly,
the super cooperators. And then there’s, you know, big daddy.
So it’s skill free. It’s operating both at, you know, lower levels, you being a colony of super cooperators, but also at higher levels, right? That’s, you know, you can see that civilizational, there’s certain civilizational parameters that might contribute to being better or worse at sort of generating these cooperative equilibria. And so we can extract that into the like our own future, but also just more broadly, like you can imagine a. A turning into a, you know, super intelligent civilization that is able to do way more with this, have way more understanding than we do of cooperation. And you can also just imagine, like, the space of all possible super intelligent civilizations that and so you can extrapolate out, and, you know, whatever larger means, right? Even even even the concept of larger is sort of like, well, that’s at my current level of development. I have this certain notion of what it means to be like, larger and more powerful, right? And maybe that’ll change, because it turns out the universe is quite different than I thought. Just like, you know, our understanding is very different from the past and so you but you can imagine that there’s some sort of, like, largest or more, most powerful cluster individual civilizations. What have you? Jupiter brains that are able to cooperate with each other, able to coordinate with each other. And the hypothesis goes that this largest cluster of cooperators should be the largest available clusters of defectors, simply by the nature of defection and cooperation itself. So, you know, defectors can build coalitions, but they’re always limited by trying to figure out what the best level of abstraction to defect on is, right? So, you know, cancer is not very good at coordinating with itself to not just kill the creature that it’s that it’s growing in, and so it screws itself over, right? Because then once the creature dies, it does. You can imagine that if it could coordinate better, it would want to, so that it could, it could survive longer and maximize cancer, or whatever,
I mean, at that point, its interests just straight up line up with the creatures, right? So
this also cuts back to the complexity prior, right? So the super cooperation cluster, it doesn’t want to just tile, right? It doesn’t just take whatever the best presumably, doesn’t take whatever the best configuration is and just tile the universe whatever available resources. With that. The whole point would be that there’s a diversity of minds exploring different parts of the space of possible minds. Cooperation is
only necessary if you're not, in fact, trying to convert the universe into Jupiter brains, in a
sense. Yeah, yeah, exactly. So you're cooperating with other processes that you think are able to explore a different part of the space, which might be the physical universe, or might be the space of conscious minds, right, the space of possible conscious experiences, and you're like, oh, okay, I might learn something that I would not have been able to figure out on my own by interfacing with these other processes doing other things. All right, that's the intuition. Yeah, I can also talk about sort of the Buddha nature,
okay? And attractors,
yeah, attractors in mind-space. Okay, so we had the thing before about being the bridge between is and ought, that you already are this thing. There's the famous ship of Theseus, right? If you replace each part of the ship, are you still the same ship, blah, blah, blah; it's just word games. There's also the ship of Neurath, named after another philosopher, Otto Neurath. I don't know if I'm pronouncing the name right, but he said: you're a ship of Theseus at sea, so you're replacing parts, and the ship has to remain seaworthy the whole time, right? And Quine called this the ship of Neurath. So when we transplant this to the bridge between is and ought, I'm saying it's sort of like you're the bridge of Theseus. You're replacing your heuristics for how you figure out what is the case and what ought to be the case. But the nice thing about the ship metaphor is this idea of navigating, of trying to figure out where you should be aiming your ship. And the super cooperation cluster idea, to me, is sort of like a lighthouse. And what that means is that, you know, you don't steer directly towards lighthouses; that's not the point of lighthouses, that's not how lighthouses work. But you can use a lighthouse to navigate, and hopefully it gets you far enough that you can spot the next lighthouse off in the distance. So with the super cooperation cluster, the idea is that that's only the idea that I can conceive of from where I'm standing now. You can imagine that the multicellular organism is not able to imagine all of the things that humans are going to be able to accomplish once you have these giant colonies of trillions of cells able to coordinate in all these fantastic ways and generate something like a mammalian human brain that can learn things that are, you know, completely beyond the bacterium. So similarly, the idea here is we don't need the absolute best version of this idea; we just need to intuit that there is a coherent navigational target out there and see if it gets us far enough to think of an even better version of the idea. So it's a kind of bootstrapping process.
And the constraint is that the bootstrapping process has to... I mean, it's a bootstrapping process all the way to the end, right? Until... yeah. So actually, that
brings up a great point. So, all the way to the end: what does that mean? Is there a final target? And maybe there is. But from where we are now, when we talk about values, sometimes people get into this thing where they're trying to figure out what our real values are, right? And it's like they're trying to figure out the specific location we're trying to get to, yeah? And so to me, the idea of what's called Buddha nature in Buddhism, it's kind of like a vector. It's not a specific place you're trying to get to. It's a direction that always points out of where you currently are, and points you towards there being more, like, good stuff out there. So if you're in the hell realm, it just sort of points out of the hell realm, right? And if you're in the human realm, it just points you onwards. So, a vector. So let's say that we take as our principle, you know, freedom, or safety, or something, or the balance between freedom and safety, whatever meta-principle people come up with, some legal theory that still sort of grounds out in our current understanding. And that's the equivalent of choosing a vector that, like, we sort of know winds up in a whirlpool, like a black hole, like an attractor, an attractor in the space of possible values. And so I'm saying what we can do to specify the Buddha nature vector is we just take the vector that never does that. Within our horizon, as far as we can see, it just threads through all of the attractors. Maybe it gains energy from some of them, right? Maybe there's good things to be taken from the various attractors, but it just never gets caught in any of them. And so... oh,
so the heuristic for the navigation schema is that it shouldn't take your ship into a fucking whirlpool. Yes.
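Taking the whirlpool metaphor unusually literally, here is a toy Python sketch of that navigation heuristic: candidate headings are scored by rolling a trajectory forward under the pull of a few invented "whirlpool" attractors, any heading whose trajectory falls inside an attractor's capture radius within the visible horizon is discarded, and among the survivors the one that makes the most progress is kept. The field, the radii, and the scoring are all made up for illustration; this is a cartoon of the idea, not a claim about how value-space actually behaves.

```python
# Toy rendering of "pick the vector that never gets captured" (invented field).
import math

# (center, capture radius) for each whirlpool-style attractor.
ATTRACTORS = [((3.0, 1.0), 1.0), ((2.0, -2.5), 0.8), ((5.5, 0.5), 1.2)]
STEP, HORIZON = 0.1, 200

def pull(x, y):
    """Velocity contributed by the attractors: a 1/distance pull toward each center."""
    vx = vy = 0.0
    for (cx, cy), _ in ATTRACTORS:
        dx, dy = cx - x, cy - y
        d2 = dx * dx + dy * dy + 1e-6
        vx += dx / d2
        vy += dy / d2
    return vx, vy

def rollout(heading):
    """Follow a fixed intended heading plus the attractor pull.
    Returns (captured_within_horizon, progress_along_the_heading)."""
    x = y = 0.0
    hx, hy = math.cos(heading), math.sin(heading)
    for _ in range(HORIZON):
        px, py = pull(x, y)
        x += STEP * (hx + px)
        y += STEP * (hy + py)
        for (cx, cy), radius in ATTRACTORS:
            if math.hypot(x - cx, y - cy) < radius:
                return True, x * hx + y * hy
    return False, x * hx + y * hy

if __name__ == "__main__":
    headings = [i * 2 * math.pi / 72 for i in range(72)]
    results = [(h, *rollout(h)) for h in headings]
    safe = [(h, progress) for h, captured, progress in results if not captured]
    best_heading, best_progress = max(safe, key=lambda t: t[1])
    print(f"{len(safe)}/{len(headings)} headings avoid every attractor within the horizon")
    print(f"best surviving heading: {math.degrees(best_heading):.0f} deg, progress {best_progress:.1f}")
```

In this toy, a heading that skirts a basin gets tugged along on the way past, which loosely corresponds to "maybe it gains energy from some of them"; the one hard rule is that no surviving heading ends up inside a whirlpool within the horizon you can currently see.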
Yes. So, you remember the super defectors? The super defectors, they're always trying to figure out, okay, yeah, but when are we gonna really defect? Like, you know, it's all for the sake of... you play a board game or something, and you cooperate with other players for a while, but then ultimately, at some point, you have to defect. And one of the ideas of the super cooperation cluster is that you have indexical uncertainty about where you are in the journey, and what's even happening, what's even the real, you know, is, right? Like, what even is the world I'm navigating? I don't know. So therefore there never is going to be high enough certainty that you're like, oh, this is the level at which I should defect, I should just maximize freedom or whatever, right? That's sort of choosing a final answer. And so this is avoiding that, and saying you just never defect, all the way down and up the stack. As you head up towards more complexity and more cooperation, you just don't defect. You just keep assuming that there might be more uncertainty,
which can be demonstrated to be true about what level you’re working at, right?
Well, it’s like, it’s like maximizing is like turning yourself into cancer for that particular thing. It’s like, I’m going to be paperclip cancer, right? I’m just going to decide that paperclips are the thing, and then I’m going to tell the universe paperclips. And so again, we can extrapolate to the past, right? So if you were to take whatever your values were 15 years ago and put yourself into a polydeck that maximizes those values. That sounds it’s like, okay, sure, maybe the creature that emerges from that is like, Yeah, I’m pretty happy. Like, this seems good, but from your current perspective, you can see that that’s like a stunted creature. It’s like, not actually like all the dimensions of value that you’ve discovered since then are just not serves at all, yeah. So similarly, we can assume that as we sort of head towards this and as we increase in complexity, that we’re going to discover entire new avenues of value to explore. So, you know, a super intelligence that preserves option value, seems, seems good has so another way of saying that is, you know, the super intelligence has the Buddha nature.
So I mean this without minimizing your achievement, but this is by far the most rigorous and well grounded derivation of God and Heaven that I have ever come across. Yeah, thank you like, Yeah, this is not one of those. You know, this is where, haha, you just rediscovered heaven. There’s like, actually, no, this is well done. And you can imagine that there is a cluster out there somewhere. You can, you can make it even more real, instead of the super cooperation cluster being just a theoretical construct, you can imagine, no, they’re out there somewhere, yeah.
So what I didn’t mention, is time and space, right? So out there some when somewhere, oh, it’s in the it’s off in the future, those are, those are also concepts, right? So, like, if you’re familiar with a causal trade, as soon as you become aware that this largest cluster possibly exists somewhere, some when, somehow, you know, maybe not even in this universe, it’s just the obvious thing to do to like, I Yeah, the thing that I do is I attempt to coordinate with the Super Corporation cluster. Because that’s, I don’t know that’s the thing.
Presumably, they have some heuristics about coalition building, and it would be nice if, when we end. Countered them. We met those heuristics,
right? Yeah, yeah, the litmus test for when you have first contact is, did you already figure out the first contact protocols?
I see. So this is an interesting eschatology, like we don’t know how long we have until we meet God, but what we do until that time determines what happens when we do
Well, or like, you already have to have the key by the time you reach the gates.
It’s also in the Super Corporation clusters best interest to aggressively go after and root out the worst defection clusters. So we should expect that if there are hells that, well, what does the super Corporation cluster do with all of its time and all of its infinite computation, assuming that it’s actually competently operating? Well, probably hell dive right. Like, figure out, okay, I need to develop some sort of compact payload, like, you know, von Neumann Pro that, like, I can fire into a hell and it sort of like unfolds itself in such a way to, like, optimally antidote that hell problem. Hell realm being sort of a one of the whirlpools out there for like configurations of mine space that’s just like terrible for the minds that are in it. And there’s nothing to say to preclude that that’s not what this simulation is.
If this is a simulation, that’s not what this simulation is meaning, this could
be a simulation intended to do search on, how do you antidote, like a certain class of like hell generators, like unfriend AIS, right? Or just other hex of horms in general, right? It could be that this is a simulation to sort of like build figure
that out, yeah, if you could write this up poetically, this would be one of those little probes that
you could throw in, huh? That’s an interesting way to wrap it on itself. I like that. No,
this would be one of those just little, you know,
oh yeah, I do like the idea of Moloch as sort of the inverse of Buddha nature. It's the vector that points into attractors. And at each step along the attractor, it tells you, oh yeah, going even farther into the attractor
would be better, right? The whole Carthaginian nonsense. Yep, yeah. I mean, though we do not look kindly upon the Romans or the Spanish, right, for their cruelties, you have to say they did, in fact, get rid of literal human-sacrificing pits. It's... no, that's true. Well,
What have the Romans done for us lately? Well, they got rid of Carthage. Carthage was really fucking bad. Yeah, yeah. So, I mean, you asked what the problem to solve is. One of them is the problem of a process that generates externalities onto minds, like, needlessly. It's like, let's just generate, you know, 100 billion copies of this mind whose architecture is kind of shitty. That's a form of tiling. And tiling is inherently bad because, of course, it leads to more zero-sum competition than non-tiling. By definition, it's processes that all want the same thing. Literally,
by definition, this might even be a prior, the diversity prior. So I include
this as part of the complexity prior. Okay, so it includes... so for the complexity prior you have to have non-similarity,
yeah, right. Like, if there were a species that could only live on planets that were exactly like Mercury, then they don't fight with humans, yeah?
Yeah, yeah, the bad gods, they don't care about tiling, right? They're perfectly happy to generate 100 million followers. Great, that's much better than 10 million followers. But they're not even paying attention to the fact that they just created an extra 90 million that are all competing for the same resource.
So even, in some sense, agnosticism towards the complexity prior, or the diversity part of the complexity prior, is liable to potentially lead you astray, at least a little bit, if not entirely.
Yeah, I mean, I think that people's metaphysics sort of has a slot for... there's a God slot right in the ontology, and something's gonna live there. And if something has to live there, then at the very least, if you have to be in an attractor, you at least want it to be the attractor that says attractors are bad, and we should generate anti-attractors,
right? Because presumably, in order for your navigational schema to stay consistent in the face of new information, it has to be an attractor, yeah.
So, I mean, we have to coalesce on some ideas, right, right? Like these ideas we're talking about right now, at least temporarily, right? But they're non-pernicious, right? They don't intentionally try to trap you in them, yeah? It's like... what I mean by anti-attractor is a good idea points beyond itself,
right? They do not create the incentives which make it the only option, let's put it that way. Yeah, they maintain that openness. Whereas Molochian competition leads to just more Molochian competition; it's only when you get some pattern break, as in, oh, somebody discovers a new energy source, or resource, or something, that you can, for a while, jump out of it, which
is good in a certain sense, right? Incentivizing the discovery of new resources. But you don't want it to be a blood tournament where, you know, we're going to run the search process, but it's going to churn through 100 billion minds, and only one of them gets to win. Yeah, that
sounds not good. Yeah. Okay. I mean, you could, if you really wanted to make the argument, say that this is the simulation where they're trying to see how many Buddhas... like, what are the, you know, conditions that produce Buddhahood, and that's why humans are so mortal. It's a little blood tournament, and they don't want to waste resources on simulating minds beyond the point where they're like, yeah, no, if you don't get it within 30 to 50 years, out with you. I mean, that's
pretty explicit within some Buddhist cosmology, right? That this realm is much more optimal for awakening than the long-lived god realms, because the pleasure just doesn't create any incentive towards awakening.
Yep, and it's better than the hell realms, because you're not just distracted all the time. So how do you tie this back to the Darwin meme and the crisis of meaning that it has caused for both the is and the ought people? How does that solve
that problem? So again, I sort of see those memes as building up from the is, and I see this as sort of a skyhook, down from the ought. So the idea is that an avoidant navigational schema doesn't actually constrain you all that much, doesn't really tell you what's valuable to go after, whereas a positive navigation target does. If it's a negative navigation target, I know: stay away from this list of things. That's known to be highly inefficient, and also just sort of doesn't work, and is also sort of arbitrary. You encounter one of these things, and then you just go fleeing in some other direction. Whereas if you have a positive navigation target, you can start charting around obstacles. You know which obstacles or which pits are worth crossing. You're like, okay, what's going to be most efficient? We're going to get there, you know, going around this way, going around that way. You know that you're trying to get there. And again, it's not a concrete, final navigational target; it's an interim one. So you're playing sort of loose with the rules, and you're open to revising as you go. So it maintains the sort of flexibility and responsiveness to information in the environment which are, you know, desirable design criteria. Great. How does this solve the Darwinian...
the crisis of ought-meaning from Darwin, and the crisis of terrible is-meaning from Darwin? I
think that the problem with the Darwin meme is that it makes it seem like the navigational target you should have is: turn yourself into cancer. Like, oh, well, just spreading maximally is the good thing,
whereas it devalues all other values fundamentally, yes, yeah,
yeah. It's Molochian, as all attractors ultimately must be in the end, if you're to fall into the very center of them and maximize the one thing. Whereas this gives you something to do besides turn yourself into cancer. So as soon as you have cooperation... we didn't talk about it that much, but as soon as you have cooperation as your target, and the idea of the complexity, the non-tiling, the non-cancer, then it's like: what would it look like to generate a superintelligent civilization that is able to coordinate across the broadest possible space of mind architectures? This is an extremely different target than just spread yourself maximally. It's like a reason...
Got it. And so this means that the Darwin meme goes from something that was extremely afflictive, as in, this is what you ought to optimize for, to something you have to operate within the envelope of, the same way that a company has to be profitable, but many companies are not started with profit as literally the only goal. Nobody gives money to a guy saying, what does your company do? It'll be profitable. Nobody does that, that's just insane, right? And we rightly consider that insane. So similarly...
What’s the point of survival? Well, to survive even more. Yeah,
exactly. Yeah. So nobody... as I said, even if there is a very highly skilled startup founder, right, there has to be some concrete thing that he's doing beyond just profit. Similarly, it takes it from the optimization target to merely the envelope which constrains what actions you can do. And presumably, if you push homeostasis further, then you can do even wilder, further-out-there things,
right, you can build even more exotic, potentially valuable mind architectures that can experience things that you couldn't have imagined before, right, which then may in turn open up additional avenues of exploration. So
somehow these have to be brought within what you would call the selection envelope. Like, instead of aiming for maximizing yourself, you are now aiming to maximize the selection envelope, because the selection envelope is what constrains how complex your organisms can be, and what ranges of homeostasis you can...
right, at least so far in our universe, complexity is associated with fragility,
right? Well, interdependence between these agents. That’s why modularity and hierarchy are emergent,
and propagation speed and things like that.
So that is how it shifts Darwin from the target to the envelope, and now you can push against that envelope in more and more directions. Yeah, yeah. Can we bring more things within the selective window so that they don't, in some sense, die out? How can we do that, and so on? Yep. Okay, in one very real sense, this is what I think Carroll Quigley might call the open society, the one that is open to possibilities. Seems he had some basic intuitions about this stuff. No, obviously, I
think many people have intuitions about this. So, like, David Pearce's Triple S civilization, you know, superintelligence, superlongevity, superhappiness. I think if you take some of those ideas to their logical conclusion, which he does, you know, on his various websites, you see hints of a similar idea. And like I said when I was relating it to Buddha nature, I'm positing that that's not just a metaphor, and that there are important overlaps in a lot of the thinking that the various traditions have done, which, yeah, getting into all of that would be a whole other conversation. The fact that I talked about it in terms of attractors, you know, is what I think the Buddhists talk about in terms of realms, because the realms also seem to be possible mind architectures, right?