AXRP Episode 14 - Infra-Bayesian Physicalism with Vanessa Kosoy
===============================================================
[Google Podcasts link](https://podcasts.google.com/feed/aHR0cHM6Ly9heHJwb2RjYXN0LmxpYnN5bi5jb20vcnNz/episode/YjI2MDQ0YTUtYjAyMy00YmI3LWE3MWMtYmFhMTU3MmM4MzJl)
This podcast is called AXRP, pronounced axe-urp and short for the AI X-risk Research Podcast. Here, I ([Daniel Filan](https://danielfilan.com/)) have conversations with researchers about their research. We discuss their work and hopefully get a sense of why it’s been written and how it might reduce the risk of artificial intelligence causing an [existential catastrophe](https://en.wikipedia.org/wiki/Global_catastrophic_risk): that is, permanently and drastically curtailing humanity’s future potential.
Late last year, Vanessa Kosoy and Alexander Appel published some research under the heading of “Infra-Bayesian physicalism”. But wait - what was infra-Bayesianism again? Why should we care? And what does any of this have to do with physicalism? In this episode, I talk with Vanessa Kosoy about these questions, and get a technical overview of how infra-Bayesian physicalism works and what its implications are.
Topics we discuss:
* [The basics of infra-Bayes](#infra-bayesics)
* [An invitation to infra-Bayes](#invitation-to-infra-bayes)
* [What is naturalized induction?](#what-is-naturalized-induction)
* [How infra-Bayesian physicalism helps with naturalized induction](#how-ibp-helps-nat-ind)
+ [Bridge rules](#bridge-rules)
+ [Logical uncertainty](#logical-uncertainty)
+ [Open source game theory](#osgt)
+ [Logical counterfactuals](#logical-counterfactuals)
+ [Self-improvement](#self-improvement)
* [How infra-Bayesian physicalism works](#how-ibp-works)
+ [World models](#world-models)
- [Priors](#priors)
- [Counterfactuals](#counterfactuals)
- [Anthropics](#anthropics)
+ [Loss functions](#loss-functions)
- [The monotonicity principle](#monotonicity-principle)
- [How to care about various things](#caring-about-various-things)
+ [Decision theory](#decision-theory)
* [Follow-up research](#follow-up-research)
+ [Infra-Bayesian physicalist quantum mechanics](#ibpqm)
+ [Infra-Bayesian physicalist agreement theorems](#ibp-aumann)
* [The production of infra-Bayesianism research](#ib-research-production)
* [Bridge rules and malign priors](#bridge-rules-malign-priors)
* [Following Vanessa’s work](#following-vanessas-work)
**Daniel Filan:**
Hello everybody. Today, I’m going to be talking with Vanessa Kosoy. She is a research associate at the Machine Intelligence Research Institute, and she’s worked for over 15 years in software engineering. About seven years ago, she started AI alignment research, and is now doing that full-time. Back in [episode five](https://axrp.net/episode/2021/03/10/episode-5-infra-bayesianism-vanessa-kosoy.html), she was on the show to talk about a sequence of posts introducing Infra-Bayesianism. But today, we’re going to be talking about her recent post, [Infra-Bayesian Physicalism: a Formal Theory of Naturalized Induction](https://www.alignmentforum.org/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized), co-authored with Alex Appel. For links to what we’re discussing, you can check the description of this episode, and you can read the transcript at [axrp.net](https://axrp.net/). Vanessa, welcome to AXRP.
**Vanessa Kosoy:**
Thank you for inviting me.
The basics of infra-Bayes
-------------------------
**Daniel Filan:**
Cool. So, this episode is about Infra-Bayesian physicalism. Can you remind us of the basics of just what Infra-Bayesianism is?
**Vanessa Kosoy:**
Yes. Infra-Bayesianism is a theory we came up with to solve the problem of non-realizability, which is how to do theoretical analysis of reinforcement learning algorithms in situations where you cannot assume that the environment is in your hypothesis class, which is something that has not been studied much in the literature for reinforcement learning specifically. And the way we approach this is by bringing in concepts from so-called imprecise probability theory, which is something that mostly decision theorists and economists have been using. And the basic idea is, instead of thinking of a probability distribution, you could be working with a convex set of probability distributions. That’s what’s called a [credal set](https://en.wikipedia.org/wiki/Credal_set) in imprecise probability theory. And then, when you are making decisions, instead of just maximizing the expected value of your utility function with respect to some probability distribution, you are maximizing the minimal expected value, where you minimize over the set. That’s as if you imagine an adversary is selecting some distribution out of the set.
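*To make the maximin rule concrete, here is a minimal numerical sketch (the outcomes, actions, and payoff numbers are invented for illustration; they are not from the episode or the posts):*

```python
# Maximin over a credal set: maximize the worst-case expected utility.
# A finite set of distributions stands in for the convex set -- a linear
# functional over a convex hull attains its minimum at an extreme point,
# so checking the extreme points is enough.

OUTCOMES = ["rain", "sun"]

# Extreme points of a hypothetical credal set over outcomes.
credal_set = [
    {"rain": 0.2, "sun": 0.8},
    {"rain": 0.6, "sun": 0.4},
]

# Utility of each action under each outcome (made-up numbers).
utility = {
    "take umbrella": {"rain": 0.7, "sun": 0.5},
    "no umbrella":   {"rain": 0.0, "sun": 1.0},
}

def worst_case_eu(action):
    """Expected utility of `action`, minimized over the credal set
    (the 'adversary picks the distribution' step)."""
    return min(
        sum(p[o] * utility[action][o] for o in OUTCOMES)
        for p in credal_set
    )

print({a: worst_case_eu(a) for a in utility})
# -> roughly {'take umbrella': 0.54, 'no umbrella': 0.4}
print(max(utility, key=worst_case_eu))  # maximin action: 'take umbrella'
```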
**Vanessa Kosoy:**
The nice thing about it is that you can start with this basic idea, and on the one hand, construct an entire theory analogous to classical probability theory, and then you can also use this to construct a theory of reinforcement learning, and generalize various concepts, like regret bounds, that exist in classical reinforcement learning, or Markov decision processes. You can generalize all of those concepts to this imprecise setting.
**Daniel Filan:**
Cool. My understanding of the contributions you made - at least to what Infra-Bayesianism is - was basically that it was a way of combining imprecise probability with an update rule and with sequential decision-making in some kind of coherent way. So that if, today, I decide what I would want to do tomorrow if I learned that the sun was shining, then once I wake up tomorrow and the sun is actually shining, I still want to do that thing. Is that roughly a good way of explaining what the contribution is, at least in those original posts?
**Vanessa Kosoy:**
There are several aspects there. One aspect is the update rule. Here, I have to admit that an equivalent update rule has already been considered in [a paper by Gilboa and Schmeidler](https://www.sciencedirect.com/science/article/abs/pii/S0022053183710033), but they described it as partial orders over policies. They did not describe it in the mathematical language which we used, which was that of convex sets of so-called a-measures, and the dual form where we use concave functionals on the space of functions. They did not have that. We did contribute something to the update rule. And the other thing is combining all of this with concepts from reinforcement learning, such as regret bounds and Markov decision processes, and seeing that all of this makes sense, and so on. And the third thing was applying this to Newcomb paradox type situations to show that we are actually getting behavior that’s more or less equivalent to so-called [functional decision theory](https://arxiv.org/abs/1710.05060).
**Daniel Filan:**
Cool. One basic question I have about this. Last time, we talked about this idea of every day, you wake up, and there’s some bit you get to see. It’s zero or one. And you mentioned that one thing you can do with Infra-Bayesianism is, you can have some hypothesis about the even bits, but say you don’t know anything about the distribution of the odd bits, or how they relate to each other. And I was wondering if you could give us some sense of… We have this convex set of probability distributions, maybe over the odd bits. What would we lose if we just averaged over that convex set? What kind of behavior can we get with Infra-Bayesianism that we couldn’t get normally?
**Vanessa Kosoy:**
There are several problems with it. One thing is that there’s a technical problem that, if your space is infinite dimensional, then it’s very unclear what it even means to average over the convex set, but that’s more of a technicality. The real problem is that any kind of average brings you basically back into the Bayesian setting, and in the Bayesian setting, you only have guarantees for environments that match something in your prior. Only if you’re [absolutely continuous](https://en.wikipedia.org/wiki/Absolute_continuity#Absolute_continuity_of_measures) with respect to your prior can you have some kind of guarantees, or sometimes, you can have some guarantees under some kind of ergodicity assumption. But in general, it’s very hard to guarantee anything if you just have some arbitrary environment. Here, we can have some guarantee: it’s enough for the environment to be somewhere inside the convex set for us to guarantee that the agent gets at least that much expected utility.
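*In symbols (notation ours, not the posts’): if $\mathcal{C}$ is the credal set, the maximin policy $\pi^*$ satisfies*

$$
\pi^* \in \arg\max_{\pi}\,\min_{e \in \mathcal{C}} \mathbb{E}^{\pi}_{e}[U],
\qquad\text{and}\qquad
e^* \in \mathcal{C} \;\Rightarrow\; \mathbb{E}^{\pi^*}_{e^*}[U] \,\ge\, \max_{\pi}\,\min_{e \in \mathcal{C}} \mathbb{E}^{\pi}_{e}[U],
$$

*so membership of the true environment $e^*$ in the convex set is enough for the guarantee, whereas the Bayesian guarantee needs $e^*$ to be absolutely continuous with respect to the prior.*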
**Daniel Filan:**
Okay. Putting that concretely, just to see if I understand, [crosstalk 00:06:52]… In the even and odd bits scenario, I could say that, on all the odd bits, I’m 50-50 on whether the bit will be zero or one, and it’s independent and identically distributed on every odd day, whereas I could also have some sort of Infra-Bayesian mixture, or this set of distributions that could happen on the odd days. And it seems like, in terms of absolute continuity, one thing that could go wrong is, it could be the case that actually, on the odd days, one-third of the bits are one, and two-thirds of the bits are zero, and that wouldn’t be absolutely continuous with respect to my Bayesian prior, but it might still be in the convex set in the Infra-Bayesian setting. Am I understanding that correctly? Is that a good example?
**Vanessa Kosoy:**
Yeah. If your prior just does not help, you can assume that the things you don’t know are distributed uniformly, but in reality, they’re not distributed uniformly. They’re distributed in some other way. And then, you’re just not going to… What happens is that, suppose you have several hypotheses about what is going on with the even bits, and one of them is correct. And, with the odd bits, you don’t have any hypothesis that matches what is correct. Then, what happens with the Bayesian update is that you don’t even converge to the correct things on the even bits. The fact that the odd bits are behaving in some way which is not captured by your prior causes you to fail to converge to correct beliefs about the even bits, even though the even bits are behaving in some regular way.
An invitation to infra-Bayes
----------------------------
**Daniel Filan:**
All right. Cool. With that clarification, I’ve actually asked some people what they’d like me to ask in this interview, and I think a lot of people have the experience of… Maybe they’re putting off reading these posts because they seem a bit mathy, and they’re not sure what they would get out of reading them. I was wondering if you can toot your own horn a little bit, and say what kind of insights might there be, or what can you say that might tempt people to really delve in and read about this?
**Vanessa Kosoy:**
I think that Infra-Bayesianism is important because it is at the center of a lot of important conceptual questions about AI and about AI alignment, because in a lot of cases, the inability to understand how to deal with non-realizability causes challenges to naïve attempts to come up with models of things. And Newcombian paradoxes are one example of that, because Newcombian paradoxes are inherently non-realizable, because you have this Omega that simulates you, so you cannot be simulating Omega.
**Daniel Filan:**
Sure. And just in case people haven’t heard of that, by Newcombian paradoxes, I guess you mean situations where you’re making a decision, but some really smart agent has figured out what you would do ahead of time, and has changed your environment in ways that are maybe unknown to you by the time you’re making the decision. Is that right?
**Vanessa Kosoy:**
Yeah. Newcombian paradoxes means that your environment contains predictors, which are able to predict your behavior either ahead of time or in some other counterfactual and the things that depend on that. And that’s one type of situation where you have to somehow deal with non-realizability. And, things like just averaging over the convex set, it just doesn’t work. Another type of situation which is related to that is multi-agent scenarios, where you have several agents trying to make predictions about each other, and that’s also something where it’s not possible for all agents to be realizable with respect to each other, because that creates a sort of paradox. And this, of course, also has implications for when you’re thinking about agents that are trying to self-modify, or agents that are trying to construct smarter agents than themselves, all sorts of things like this. A lot of different questions you can ask can run up against problems that are caused by non-realizability, and you need to have some framework to deal with this.
What is naturalized induction?
------------------------------
**Daniel Filan:**
Okay. Cool. Moving on a bit, this post claims to be some kind of theory of naturalized induction. Can you first tell me, what is naturalized induction?
**Vanessa Kosoy:**
Yeah. Naturalized induction is a name invented by [MIRI](https://intelligence.org/), as far as I know, for the problem of: how do you come up with a framework for AI in which you don’t have this Cartesian boundary between the AI and its environment? Because classical models, or what you can call the [classical cybernetic model](https://en.wikipedia.org/wiki/Cybernetics), view AI as: you have an agent and you have an environment, and they’re completely separate. The agent can take actions which influence the environment, and the agent can see observations, in which sense the environment has some communication in the other direction, but there is some clear boundary between them, which is immutable. And this does not take into account a bunch of things, things like the fact that the agent was created at some point, and something existed before that. Things like that the agent might be destroyed at some point, and something different will exist afterwards, or the agent can be modified, which you can view as a special case of getting destroyed, but that’s still something that you need to contend with.
**Vanessa Kosoy:**
And it also brings in the problem where you need to have those so-called bridge rules. Because you’re working in this cybernetic framework, the hypotheses your agent constructs about the world have to be phrased from the subjective point of view of the agent. They cannot be phrased in terms of some bird’s eye view, or whatever. And that means that the laws of physics are not a valid hypothesis. To have a valid hypothesis, you need to have something like, “The laws of physics, plus rules for how to translate between the degrees of freedom that the laws of physics are described in and the actual observations of the agent”. And that makes the hypothesis much, much more complex, which creates a whole bunch of problems.
**Daniel Filan:**
And why does it make it more complex?
**Vanessa Kosoy:**
Think about it as, we have Occam’s razor… Right? … which is telling us that we should look for simple explanations, simple regularities in the world. And of course, Occam’s razor is the foundation of science and rational reasoning in general. But if you look at how physicists use Occam’s razor, physicists, they say, “Okay, here. This is quantum field theory, or string theory, or whatever. This is this really simple and elegant theory that explains everything.” But the theory doesn’t explain our observations directly. Right? The theory is not phrased in terms of what my eyes are going to see when I move my head two degrees to the right, or something. They’re described in terms of some fundamental degrees of freedom, quantum fields, particles, or whatever. And, the translation between quantum fields and what the camera or the eye’s going to see, or whatever, that’s something extremely complex.
**Vanessa Kosoy:**
And it’s not a simple and elegant equation. That’s something monstrous. And people have been trying to find frameworks to solve this. At least people have been a little bit trying. The classical framework for AI from the cybernetic Cartesian point of view is the so-called [AIXI](https://en.wikipedia.org/wiki/AIXI). There have been some attempts. For example, [Orseau and Ring had a paper called Space-Time Embedded Intelligence](https://www.cs.utexas.edu/~ring/Orseau,%20Ring%3B%20Space-Time%20Embedded%20Intelligence,%20AGI%202012.pdf), where they tried to extend this in some way which would account for these problems. But that did not really take off. There were problems with their solution. They haven’t really followed up much on that. This is the crux of the problem I’m trying to solve here.
**Daniel Filan:**
Okay. Can you give us a bit of a sense of why we should be excited for a solution to naturalized agency? Can we just approximate it as some Cartesian thing and not worry too much?
**Vanessa Kosoy:**
Like I said, the fact that we need to add a whole bunch of complexity to our hypothesis, the fact that we need to add those bridge rules, creates a range of problems. One problem is, first of all, this mere fact should make us suspicious that something has gone deeply wrong, because if our starting point was Occam’s razor, we are assuming that simple hypotheses should be correct. And here, the hypothesis is not simple at all. Why is our simplicity bias even going to work? And when we start analyzing it, we see that we run into problems, because you can do thought experiments, like “What happens if you have an AI and someone throws a towel on the camera?” At this particular moment of time, all the pixels on the camera are black, because the towel is blocking all the light. Now, the AI can think and decide, “Hmm, what is the correct hypothesis… I had this hypothesis before, but maybe the correct hypothesis is actually, as long as not all the pixels are black, this is what happens, ordinary physics. And the moment all the pixels go black, something changes.” I don’t know. [The fine-structure constant](https://en.wikipedia.org/wiki/Fine-structure_constant) becomes different.
**Daniel Filan:**
Yeah. It could even become a simpler number.
**Vanessa Kosoy:**
Yeah. And from the perspective of scientific reasoning, this sounds like a completely crazy hypothesis, which should be discarded immediately. But, from the perspective of a Cartesian AI system, this sounds very reasonable, because it only increases the complexity by a very small amount, because this event of all the pixels becoming black, it’s a natural event from a subjective point of view. It’s an event which has very low description complexity. But from the point of view of physics, it’s a very complex event. From the point of view of physics, when we are taking this bird’s eye view, we’re saying, “There’s nothing special about this camera and about this towel. The fact that some camera somewhere has black pixels, that should not affect the fundamental laws of physics. That would be an extremely contrived modification of the laws of physics if that happened.”
**Vanessa Kosoy:**
We should assign extremely low probability to this, but the cybernetic agent does not assign low probability to this. Something is very weird about the way it’s reasoning, and this is one example. Another example is thinking about evolution. The theory of evolution explains why we exist, or we have theories about cosmology, which is also part of the explanation of why we exist, and those theories are important in the way we reason about the world, and they help us understand all sorts of things about the world. But for the Cartesian agent, those questions don’t make any sense, because a Cartesian agent defines everything in terms of its subjective experiences. Things that happen before the agent existed, or things that led the agent to start existing, that’s just nonsense. It’s just not defined at all in the Cartesian framework. Those kinds of agents are just unable, ontologically incapable of reasoning along those lines at all, and that seems like an indication that something is going wrong with that.
How infra-Bayesian physicalism helps with naturalized induction
---------------------------------------------------------------
### Bridge rules
**Daniel Filan:**
Okay. And I was wondering… This problem of bridge rules first. Can you give us a sense informally, without going into the details, roughly how is Infra-Bayesian physicalism going to deal with this?
**Vanessa Kosoy:**
Yes. The way we are going to deal with this is saying that we don’t want our hypothesis to be describing the world from this subjective point of view, from the subjective perspective of the agent. We want our hypothesis to be describing the world from some kind of a, quote unquote, bird’s eye view. And then, the whole problem becomes, “Okay. But how do you deal with the translation between the bird’s eye view and the subjective?” Because the evidence that the agent actually has is the evidence of its senses. It’s evidence that exists in the subjective frame. And here, the key idea is that we’ll find some mathematical way to make sense of how the agent needs to look for itself inside the universe. The agent has a hypothesis that describes the universe from this bird’s eye view, and then it says, “I’m an agent with such and such source code that I know, and now I look into this universe, and I’m searching for where inside this universe this source code is running. And if the source code is running in a particular place and receiving particular inputs, then those inputs are what I expect to see if I find myself in this universe.” And this is how I measure my evidence against this bird’s eye view hypothesis.
**Daniel Filan:**
Okay. Cool. I also just want to go over… There are some things that I associate with the problem of naturalized agency, and I want to check which ones of these do you think infra-Bayesian physicalism is going to help me with? The first thing on my list was world models that include the self, and it seems like infra-Bayesian physicalism is going to deal with this.
**Vanessa Kosoy:**
Yeah. That’s true. Your world model is describing the world. There’s no longer a Cartesian boundary anywhere in there. The world definitely contains you, or, if the world does not contain you, then the agent’s going to say, “Okay, this is a world where I don’t exist, so I can safely ignore it,” more or less.
### Logical uncertainty
**Daniel Filan:**
Okay. The next one is, sometimes people talk about logical uncertainty under this umbrella of naturalized induction. Are we going to understand that better?
**Vanessa Kosoy:**
The way I look at it is that logical uncertainty is addressed to some extent by infra-Bayesianism, and then it’s addressed even better if you’re doing this thing which I’ve been calling Turing reinforcement learning, where you take an infra-Bayesian agent and say, “Okay, you also have some computer that you can play with that has a bunch of computational resources.” And this is actually the starting point for physicalism. When we go from this thing to physicalism, then I don’t think it adds anything about logical uncertainty qua logical uncertainty, but more about, “How do we use this notion of logical uncertainty that we’re getting from infra-Bayesianism, or infra-Bayesianism plus this Turing reinforcement learning thing? How do we use this notion of logical uncertainty and apply it to solve problems with naturalized induction?” So, it’s definitely related.
### Open source game theory
**Daniel Filan:**
Okay. Next thing on my list is open source game theory. That’s this idea that I might be playing a game, but you have my source code. So I don’t just have to worry about my actions, I have to worry about what you’ll reason about my actions because you have my source code. Is infra-Bayesian physicalism going to help me understand that?
**Vanessa Kosoy:**
I don’t think that’s a particularly… When I was thinking about this kind of multi-agent scenarios, I don’t really have a good solution yet, but I’m imagining that some kind of solution can come from just looking at some more classical Cartesian setting. It doesn’t feel like the physicalism part is really necessary here, because you can just imagine having several Cartesian agents that are interacting with each other, and that’s probably going to be fine. I’m not sure that the physicalism part actually adds something important to thinking about this.
**Daniel Filan:**
Yeah. It actually makes me wonder - just from the informal thing you said, there’s this idea that I have this world model and I look in it for my program being executed somewhere. In the open source game theory setting, it seems like my program’s being executed in two places. Maybe it’s not being executed, but it’s being reasoned about somehow. But is there a way of picking that up?
**Vanessa Kosoy:**
Yeah. There is certainly some relation in this regard, but I’m still not sure that it’s very important, because in some sense you already get that in infra-Bayesianism, right? With infra-Bayesianism, we can deal with those kind of Newcombian situations, where the outside world contains predictors of me, and that’s a very similar type of thing. It feels like going to physicalism mostly buys you something which looks like having… It’s not exactly having a better prior, but it’s similar to having a better prior. If you can formalize your situation by just assuming I have this prior which contains all the hypotheses I need and go from there, again, it doesn’t seem like you really need physicalism at this stage. And it seems like, at least for the initial investigation, understanding open source game theory should be possible with this kind of approach without invoking physicalism, as far as I can tell.
**Vanessa Kosoy:**
You do want to do physicalism if you are considering more specific examples of it, like acausal bargaining. You’re thinking of acausal bargaining with agents that exist in some other universes, or weird things like that.
**Daniel Filan:**
Sorry, what is acausal bargaining?
**Vanessa Kosoy:**
Acausal bargaining is the idea that, if you have two agents that do not have any causal access to each other, so they might be just really far from each other, or they might be literally in different universes that don’t have any physical communication channels with each other, they might still do some kind of cooperation, because those agents can imagine the other universe and imagine the agent in the other universe. And by having this sort of thinking, where each of them can reason abstractly about the other, they might strike deals. For example, I like apples, but there’s another agent which likes bananas. But the universe in which I live is not super good for growing apples, but it’s really good for growing bananas. And I, the agent that likes apples, believe that the other universe has the opposite property. So we could both benefit if we each start growing the fruit that the other agent likes in our own universe.
**Vanessa Kosoy:**
Assuming our utility function considers having bananas or apples in a different universe to be an actual gain. So this is an example of acausal bargaining. It is, in some sense, a special case of open source game theory, but here it becomes more important to understand: how do agents reason about this? How do they know that this other agent exists? What kind of weight do they assign to these other universes, or whatever? And when you go to these kinds of questions, then physicalism definitely becomes important.
### Logical counterfactuals
**Daniel Filan:**
Okay. So the next thing on my list is logical counterfactuals. So sometimes people say that it’s important to be able to do counterfactuals where you’re like, this logical statement that I’ve proved already, what if it were false? I have reasoned about this program, and I know that it would do this thing, but what if it did this other thing instead? Is infra-Bayesian physicalism going to help me with that?
**Vanessa Kosoy:**
Yeah. So what’s happening here is… I mean, here my answer would be similar to the question about logical uncertainty. That in some sense, infra-Bayesianism already gives you some notion of logical counterfactuals, and this is what we’re using here. We’re using this notion of logical counterfactuals. There is some nuance here because, to get physicalism to work correctly, we need to be careful about how we define those counterfactuals.
**Vanessa Kosoy:**
And this is something we talk about in the article, but the core mathematical technique here is just using this fact about infra-Bayesianism. Because the basic idea is that in infra-Bayesianism you can have this [Knightian uncertainty](https://en.wikipedia.org/wiki/Knightian_uncertainty). And what we are doing here is saying, assume you have Knightian uncertainty about the outcome of a computation, then you can build counterfactuals by forming those logical conjunctions. In infra-Bayesianism you can define conjunctions of beliefs. So once you have Knightian uncertainty about whether this computation is going to be zero or one, you can take the conjunction, assume this computation is zero, and see what comes out from it. And in some sense, you can say that infra-Bayesian agents, the way they make decisions, is by the same kind of counterfactual, is by forming the conjunction with “what if I take this action”.
**Daniel Filan:**
Okay, cool. And Knightian uncertainty, am I right that that’s just when you don’t know what probability distribution you should assign, and so maybe use a convex set or something?
**Vanessa Kosoy:**
Yeah. Knightian uncertainty is when, instead of having a single probability, you have, in your convex set, probability distributions which assign different probabilities to this event.
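*A toy example of that (ours, for illustration): if $E$ is the event "this computation outputs 1", a Knightian state of uncertainty about $E$ could be the credal set*

$$
\mathcal{C} \;=\; \{\, p \in \Delta(\Omega) : p(E) \in [0.3,\, 0.7] \,\},
$$

*which contains distributions assigning different probabilities to $E$ (anything between 0.3 and 0.7) rather than a single number; full Knightian uncertainty about $E$ would allow the whole range $[0,1]$.*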
### Self-improvement
**Daniel Filan:**
Okay. And the last thing I wanted to ask about is self improvement. This problem of what would happen if I modified myself to be smarter or something - people sometimes want this under the umbrella of naturalized induction. Will infra-Bayesian physicalism help me with this?
**Vanessa Kosoy:**
Yeah. So, I mean, here also I feel like the initial investigation of this question, at the very least, can be done just with Turing reinforcement learning. Because if you are already able to have non-realizability and you can reason about computations, then you can also reason about what happens if I build a new agent and this agent might be smarter than me, and so forth. These sorts of questions are already covered by infra-Bayesianism, plus Turing reinforcement learning, in some sense. The physicalism part helps you when you get into trouble with questions that have to do with the prior. Questions like anthropic reasoning or acausal interactions between agents from different universes, and of course the bridge rules themselves, and so on and so forth. Questions where you can imagine some kind of a simplistic, synthetic toy model environment where you’re doing things, and abstract away dealing with actual physics and anything related to actual physics. Those are questions for which you don’t really need physicalism, at least on the level of solving them for some basic toy setting.
How infra-Bayesian physicalism works
------------------------------------
### World models
**Daniel Filan:**
Okay, great. Now that we’ve heard what infra-Bayesian physicalism is roughly supposed to do let’s move on a bit to how it works. So the first thing I want to talk about is the world models used in infra-Bayesian physicalism. Can you give me some more technical detail on what’s the structure of these world models that we’re going to be using?
**Vanessa Kosoy:**
So the world model is a joint belief about physics and about computations. So what does that mean? So physics is just, there is some physical universe, so there is some state space, or we can think of it as a timeless state space. Like, what are all the possible histories? What are like all the possible ways the universe can be across time and space and whatever? So there is some space which we define, which is the space of all things the universe can be.
**Vanessa Kosoy:**
And this space, it can depend on the hypothesis. So it can be anything. Each hypothesis is allowed to postulate its own space of physical states and then the other part is the computations. So that’s what we denote by Gamma. And this Gamma is mappings from programs to outputs. So we can imagine something like the set of all Turing machines, or the set of all programs for some universal Turing machine. Let’s assume that every Turing machine outputs a bit like zero or one, so mappings from the set of programs to the set {zero, one}, we can think of it as the set of computational universes, the set of ways that the universe can be in terms of just abstract computations and not physics.
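*Written out (symbols ours, following the description here): if $R$ is the set of programs, then the space of computational universes is*

$$
\Gamma \;=\; \{0,1\}^{R},
$$

*an assignment of an output bit to every program, and a hypothesis is a joint belief over $\Phi \times \Gamma$, where $\Phi$ is that hypothesis's own space of physical states (or histories).*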
**Daniel Filan:**
Okay. So this would include something like: there’s some program I run to check if the Riemann hypothesis is true, and in some computational universe, that program outputs, yes, it is true, and in some computational universe, it outputs no, it’s not true. Is that how I should sort of understand it?
**Vanessa Kosoy:**
Yeah. This is about right, except that, of course, in this specific case, the Riemann Hypothesis, we don’t have a single Turing machine that you can run and it will tell you that. You can only check it for every finite approximation or something.
**Daniel Filan:**
Okay. I actually wanted to ask about that because in the real world, there are some programs that don’t output anything. They just continue running. And I don’t necessarily know which programs those are. So do I also get to have uncertainty over which programs do actually output anything?
**Vanessa Kosoy:**
Yeah. You don’t necessarily know. And in fact, you necessarily don’t know, because of [the halting problem](https://en.wikipedia.org/wiki/Halting_problem). But yeah, this is a good question. And we initially thought about this, but then at some stage I just realized that you don’t actually need to worry about this because you can just have your computational universe assign an output to every program. And some programs in reality are not going to halt, and not going to produce any output, but you just don’t care about this. So if a program doesn’t halt, it just means that you’re never going to have empirical evidence about whether its output is zero or one. So you’re always going to remain uncertain about this, but it doesn’t matter, really. You can just imagine that it has some output and in reality, it doesn’t have any output, but since you don’t know what the output is anyway, it doesn’t really bother you.
**Daniel Filan:**
Okay. So it’s sort of like our models contain this extra fact about these programs. We’re imagining that maybe, once it runs for infinite time, it’s going to output zero or one - it seems like I’m entertaining that possibility. So we have these models, which are over the computational universe and also the state space, right?
**Vanessa Kosoy:**
Yeah. And then your hypothesis is a joint belief about both of them. So it means that it’s an infra-distribution over the product Phi [physical states] times Gamma [computational universes]. And we use some specific type of infra-distributions in the post, which we called… Well, actually for technical reasons, we call them ultra-distributions in this post, but this is just an equivalent way of looking at the same thing. So just instead of infra-distributions and utility functions, we use ultra-distributions and loss functions, but that’s just a different notation, which happens to be more convenient.
**Daniel Filan:**
Okay. And infra-distributions are just these convex sets of distributions, right?
**Vanessa Kosoy:**
Well, yeah. What we call a crisp infra-distribution is just a convex set of distributions. A general infra-distribution is something slightly more general than that. It can be a convex set of those so-called a-measures, or it can be described as a concave functional over the space of functions. So it’s something somewhat more general. And specifically in this post, we consider so-called homogeneous ultra-distributions, which is a specific type. And this is something that you can think about as, instead of having a convex set of distributions, you have a convex set of what we call contributions, which are measures that can sum up to less than one. So the total mass is less than or equal to one, and the set has to be closed under taking smaller contributions: if you have a contribution in the set and take one which is just lower everywhere, then that one also has to be in the set. And those kinds of objects, we call them homogeneous ultra-distributions. And your usual, vanilla, convex set of distributions is a special case of this.
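*As a sketch of those definitions as described here (our phrasing; details may be simplified relative to the post): a contribution on a space $X$ is a measure $\mu$ with total mass $\mu(X) \le 1$, and a homogeneous ultra-distribution is a convex, closed set $\Theta$ of contributions that is downward closed:*

$$
\mu \in \Theta,\;\; \mu' \le \mu \;\;\Longrightarrow\;\; \mu' \in \Theta,
$$

*where $\mu' \le \mu$ means $\mu'(A) \le \mu(A)$ for every measurable set $A$.*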
**Daniel Filan:**
Okay. You said, we had this set of ultra-distributions over Phi times Gamma. The Gamma was the computational universe, and what’s Phi?
**Vanessa Kosoy:**
And Phi is the space of states of the physical universe. States or histories, or timeless states.
#### Priors
**Daniel Filan:**
Okay. So, that’s the model we’re using. So it seems like we’re putting this prior over what computations might have what outputs. Do you have a sense of what kind of prior I might want to use, and whether I might want it to be entangled with the physical world, because it seems like that’s allowed.
**Vanessa Kosoy:**
Yeah. You absolutely super want it to be entangled because otherwise it’s just not going to work. The sort of prior that you want to use here is a simplicity prior, because in some sense the whole point is having Occam’s Razor. So we haven’t actually defined explicitly what a simplicity prior is in this setting in the article, but it’s not difficult to imagine ways in which you could define it. For example, there’s something that I describe in [an old post of mine](https://www.alignmentforum.org/posts/5bd75cc58225bf0670375273/attacking-the-grain-of-truth-problem-using-bayes-savage) where you can construct this kind of prior over these convex sets by taking the [Solomonoff prior](https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference) construction and instead of Turing machines, taking oracle machines and then the oracle is the source of Knightian uncertainty. And so it’s not hard to construct something analogous to the Solomonoff prior for this type of hypothesis. And this is what you should be using in some sense.
**Daniel Filan:**
All right. When you say they should be entangled, I guess that’s because a universe where I physically type in some program and my physical, actual computer outputs something, that should tell me about the computational universe, right?
**Vanessa Kosoy:**
Yeah. The entanglement is what tells you, which computations are actually running. So, if you’re running some computation on your computer and you don’t know what the result is going to be, then you have some uncertainty about the computation and you have some uncertainty about the number, which will appear on the screen, which is some property of the physical universe, and the two are entangled together, right? You know that whatever number appears on the screen is the actual output of this computation.
**Daniel Filan:**
Okay, and how much work do I have to do to specify that entanglement? Does that come in some sense for free or when I’m defining my models, do I need to carefully say which physical situations count as which computer programs?
**Vanessa Kosoy:**
You start with a prior, which is by default a non-informative prior. So something like [the Solomonoff prior](https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference) so anything goes - maybe we have this entanglement, maybe we have that entanglement, maybe we have no entanglement, maybe we have whatever. The important thing is that the agent is able to use the empirical evidence it has in order to start narrowing things down, or updating away from this prior in some sense. Updating away in some sense, because the formalism is actually updateless. It just decides on some policy in advance. But in practice, the things you do when you see particular things, in a particular branch of the tree of things that could happen, would be dictated by some kind of subset of hypotheses, which seem more likely on this tree in some sense.
#### Counterfactuals
**Daniel Filan:**
Okay. And I guess that’s how I’m learning about which computations are being implemented, if I’m thinking of the updateful version. And you mentioned that we were going to have counterfactuals earlier. Can you say a little bit more about what exactly counterfactuals about computations are going to look like here?
**Vanessa Kosoy:**
Well, the basic idea is simple. The basic idea is if there’s a particular computation, then we can consider… We have some belief about computations or belief about universe times computation, and then there’s a particular computation and we want to consider the counterfactual that its output is zero. Then all we need to do is take a subset. So we had this convex set of distributions, or more precisely, convex set of contributions. And now all we need to do is take the subset of the contributions which assign probability zero to the other thing, which is not supposed to happen. And that’s our counterfactual. That’s the basic building block here.
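*In symbols (our rendering of this verbal description): if $\Theta$ is the hypothesis, viewed as a set of contributions over $\Phi \times \Gamma$, the counterfactual "program $p$ outputs $0$" keeps only the contributions that give zero mass to the opposite outcome:*

$$
\Theta \,|\, (p = 0) \;=\; \{\, \mu \in \Theta \;:\; \mu\big(\Phi \times \{\, y \in \Gamma : y(p) = 1 \,\}\big) = 0 \,\}.
$$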
**Daniel Filan:**
So when we’re talking a bit more specifically about that, how the self is fitting in, is the idea that I’ve got this physical world model and somehow I’m looking for parts which give evidence about the results of my computation? Or, what’s actually going on when I’m trying to locate myself in a world model in this setting?
**Vanessa Kosoy:**
Yeah. So the key mathematical object here is something that we call the bridge transform. And that’s actually what enables you to locate yourself inside this hypothesis or inside the universe described by this hypothesis. And the idea here is that the bridge transform is a formal way of looking at this hypothesis and exploiting the entanglement between the computational part and the physical part in order to say which computations are actually running in the physical universe, and which computations are not running.
**Vanessa Kosoy:**
The more precise way to think about it is there are some facts about the computational universe that the physics encodes, or knows, or whatever you like to call it. And you can describe this as some subset of Gamma, as some element of 2^Gamma [the set of subsets of Gamma], the subset of Gamma where the physical universe knows that the computational universe is somewhere inside that subset. And then the bridge transform starts with our hypothesis about Phi times Gamma and transforms it to a hypothesis about Phi times 2^Gamma times Gamma.
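*So, writing $\square(X)$ for the ultra-distributions on a space $X$ (notation ours), the bridge transform has the shape*

$$
\mathrm{Br} \;:\; \square(\Phi \times \Gamma) \;\longrightarrow\; \square(\Phi \times 2^{\Gamma} \times \Gamma),
$$

*with the new $2^{\Gamma}$ coordinate read as "the set of computational universes consistent with what the physical universe knows".*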
**Daniel Filan:**
Okay. And how does it do that?
**Vanessa Kosoy:**
Yeah. So, this connects again to the question of counterfactuals, because in some sense - what does it mean for the universe to be running a particular computation? The answer we give here, phrased in informal terms, is the universe runs a computation when the counterfactuals corresponding to the different outputs of this computation look different in the physical world. So if I consider the counterfactual in which a certain program outputs zero versus the counterfactual in which that program outputs one, and I look at what the physical universe looks like in those two counterfactuals - if the universe looks the same, then it means that it’s not actually running this program, or at least you cannot be sure that it’s running this program. Whereas, if they look completely different, like two distributions with disjoint supports for example, then we’re sure that the universe is running this program. And there can also be an intermediate situation in which you have two distributions and they’re overlapping. And then the size of the overlap determines the probability with which the universe is running this program.
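*The overlap idea can be illustrated numerically. The sketch below is only an illustration of the verbal criterion - it treats "the size of the overlap" as the mass shared by two counterfactual distributions over a toy state space - and is not the post's actual definition of the bridge transform:*

```python
# Toy illustration: two counterfactual distributions over physical states,
# one for "program p outputs 0" and one for "program p outputs 1".
# Identical distributions => no evidence the universe runs p;
# disjoint supports => the universe surely runs p;
# in between, the shared mass ("overlap") interpolates.

states = ["s1", "s2", "s3"]

counterfactual_p0 = {"s1": 0.7, "s2": 0.3, "s3": 0.0}
counterfactual_p1 = {"s1": 0.1, "s2": 0.3, "s3": 0.6}

overlap = sum(min(counterfactual_p0[s], counterfactual_p1[s]) for s in states)
prob_running = 1.0 - overlap  # illustrative reading, not the paper's formula

print(f"overlap = {overlap:.1f}, P(universe runs p) ~ {prob_running:.1f}")
# overlap = 0.4, P(universe runs p) ~ 0.6
```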
**Daniel Filan:**
Okay. So that’s the bridge transformation. And so basically we’re saying, does the universe look different depending on what the output of this program is. And it tells us which programs the universe is running. How does that help us locate the self?
**Vanessa Kosoy:**
Yeah. The way it helps us locate the self is by using your own source code. If you are an agent and you know your own source code - and we know that at least in computer science world, it’s not hard to know your source code, because you can always use [quining](https://en.wikipedia.org/wiki/Quine_(computing)) to access your own source code in some sense - then you can ask, okay, my source code - so what does it mean, “your source code”? Your source code is a program that gets the history of your past observations and actions as input and produces a new action as output. So this thing, if the universe is running it with a particular input, then it means that I exist in the universe and I observe this input. Right? And conversely, if I’m an agent and I know that I have seen a certain input, then this allows me to say that, okay, this is information about the true hypothesis. We know that the true hypothesis has the property that the universe is running the program, which is me, with this input.
**Daniel Filan:**
Okay. So this basically means that the equivalent of bridge rules is just that, I’m checking if my hypothesis about the universe, would it look any different depending on what I am, or what actions I would produce in response to given observations? Is that roughly right?
**Vanessa Kosoy:**
Yeah. It means that, suppose that I’m looking at a hypothesis and I want to know, is this hypothesis predicting that I’m going to see a red room? So I’m thinking, okay, suppose there’s the program which is me, which is getting an input which is the red room. And it has an output which says, should I lift my right hand? Or should I lift my left hand? And now I consider two counterfactuals: the counterfactual in which upon seeing the red room, I decide to lift my left hand; and the counterfactual in which upon seeing the red room, I decide to lift my right hand. And those are two computational counterfactuals, because I’m just thinking about this in terms of computation. There’s the computation, which is me, receiving the input, which is the red room, and producing the output, which is which arm to lift. So if in those two counterfactuals, the hypothesis says that the physical universe looks different, then this is equivalent to saying this hypothesis is actually predicting that I’m going to see a red room because the hypothesis says that the universe is running the program which is me, with an input which is a red room.
#### Anthropics
**Daniel Filan:**
Okay. So this is reminding me a bit of [anthropics](https://meteuphoric.com/anthropic-principles/). In particular, in anthropics, people sometimes wonder how you should reason about different universes that might have more or fewer copies of yourself. So if I learned that there’s one universe where there’s only one person just like me versus another universe where there are 10 people like me, some people think I should consider those just as likely, whereas some people think, well, the one where there are 10 times as many mes - I should think that I’m 10 times as likely to be in that one. If you’re just considering “does the universe look different depending on the outputs of my actions”, it sounds like you’re equally weighting universes in which there’s just one of me versus universes where there’s 10 of me, or it seems like it’s hard to distinguish those. Is that fair to say?
**Vanessa Kosoy:**
Yeah, it’s absolutely fair. So definitely the theory of anthropics that IB physicalist agents have is a theory in which the number of copies doesn’t matter. It doesn’t matter if the universe is running one copy of you with a certain input or 80 copies. That’s not even a well-defined thing. And it’s not so surprising that it’s not a well-defined thing, because if you believe that it should be a well-defined thing, then you quickly run into philosophical conundrums. [If I’m just using a computer with thicker wires, or whatever, to run the same computation, does it count as having more copies of the AI?](https://www.nickbostrom.com/papers/experience.pdf) And the physicalist answer is there’s no such thing as number of copies. Either I exist or I don’t exist, or maybe the hypothesis thinks I exist with some probability. I mean, there might be different copies in the sense of different branches, right?
**Vanessa Kosoy:**
There is me now observing something, and there are different things I can observe in the future. And there are hypotheses which are going to predict that you will observe both. Like, I’m about to enter a room and the room is going to be either green or red. And some hypotheses are going to say, well, the universe is running you with both inputs, both with the input ‘red room’ and the input ‘green room’. So in this sense, there are two copies of you in the universe seeing different things. But within each branch, given the particular history of observations, there’s no notion of number of copies that see this history.
**Daniel Filan:**
Okay. So you mentioned that you should be suspicious of the number of copies of things because of this argument involving computers with thick wires - can you spell that out? Why do thick wires pose a problem for this view?
**Vanessa Kosoy:**
Think of an AI. An AI is a program running on a computer. So what does it mean to have several copies? So we can imagine having some computer, like a server standing somewhere, and then there’s another server in another room and we think of it as two copies. Okay, suppose that’s two copies, but now let’s say that the two servers are standing in the same room next to each other. Is this still two copies? Okay, suppose it is. Now let’s suppose that instead, this is just like a single computer, but for the purpose of redundancy, every byte that’s computed is computed twice to account for random errors from cosmic rays or whatever. Does this still count as two copies? At some point it’s getting really unclear where the boundary is between different copies and just the same thing. And it’s really unclear how to define it in the general case.
### Loss functions
**Daniel Filan:**
Okay. Getting back to infra-Bayesian physicalism, we had these world models where you had this uncertainty over the computational universe and the physical universe and you also have this bridge transform which lets you know what computations are being run in a given universe. You can check if your computation is being run in some universe. Next, what I want to ask about is how you have loss functions. Because if I’m being an agent, I need models and I also need some kind of utility function or a loss function. Can you tell me what those will look like in the infra-Bayesian physicalist setting?
**Vanessa Kosoy:**
So this is another interesting question because in the cybernetic framework, the loss function is just a function of your observations and actions, and that in itself is another thing about the cybernetic framework which is kind of problematic. Because in the real world, we expect agents to be able to care or assign some importance to things that they don’t necessarily observe directly, right? So we can easily imagine agents caring about things that they don’t directly observe. I care about some person suffering, even though I don’t see the person. Or a paperclip maximizer wants to make a lot of paperclips, even if it doesn’t see the paperclips all the time. One problem with the cybernetic framework is that you can only assign rewards or losses to things that you observe.
**Vanessa Kosoy:**
In the physicalist framework it’s very different because your loss function is a function of Gamma times 2^Gamma. In other words, your loss function is a function of which computations are running, roughly speaking, and what are the outputs of these computations. This can encode all sorts of things in the world. For example, if there’s some computation in the world, which is a person suffering, then I can care about the universe running this computation.
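*In symbols, as described here (the post's exact domain may be a subset of this): a physicalist loss function has the shape*

$$
L \;:\; \Gamma \times 2^{\Gamma} \;\longrightarrow\; \mathbb{R},
$$

*i.e. it depends on the outputs of computations ($\Gamma$) together with which computations are physically realized ($2^{\Gamma}$), rather than on the agent's own observations and actions.*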
**Daniel Filan:**
Okay. And basically that gets encoded - 2^Gamma tells me which computations are running in the world and I can care about having fewer computations like that or more computations like that.
#### The monotonicity principle
**Vanessa Kosoy:**
Yeah, only there is a huge caveat, and the huge caveat is what we call the monotonicity principle. The monotonicity principle is the weirdest and most controversial, unclear thing about the whole framework, because the monotonicity principle basically says that your loss function has to be monotonic in which programs are running. Physicalist agents, the way we define them, can only be happier if more programs are running and more upset if fewer programs are running. They can never have the opposite behavior. That’s kind of a weird constraint; we have all sorts of speculations about how to think about it, but I don’t think we have a really good understanding of it yet.
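*A rough formal rendering of the constraint as Vanessa states it (informal; the post phrases the condition through the ordering associated with the bridge transform, and the details may differ): if $\rho \subseteq \rho'$ are two possible sets of running programs, then*

$$
L(\rho') \;\le\; L(\rho),
$$

*so running more programs can never increase the loss, and "I want this program not to run" is not an expressible preference.*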
**Daniel Filan:**
Yeah. It sounds like that means that, if there’s some program that’s a happy dude living a fun life and if there’s some program that’s an unfortunate person who wishes they don’t exist, the monotonicity principle is saying I either prefer both of them happening or I dis-prefer both of them happening, but I can’t like one happening and dislike the other happening.
**Vanessa Kosoy:**
Yeah. You cannot have a preference which says this program is a bad program which I don’t want to run. This is something that’s not really allowed.
**Daniel Filan:**
Why do we have that principle in infra-Bayesianism? Why can’t you just allow any loss function?
**Vanessa Kosoy:**
So the reason you have this is because the bridge transform produces ultra-distributions, which are downwards closed. In simple terms, what this means is that you can never be sure that some program is not running. You can be certain that some program is running, but instead of being certain about the fact that some program is not running, the closest thing you can have is just Knightian uncertainty about whether it’s running or not. It’s always possible that it is running. Given this situation, it’s not meaningful to try to prefer for a program not to run because it always might be running. Because you resolve your Knightian uncertainty by checking the worst case, if you prefer this program not to run, the worst case is always that it runs and there’s nothing you can do about it.
**Vanessa Kosoy:**
The reason you cannot be certain that the program is not running is just how the bridge transform works. In some sense, it’s a consequence of the fact you can always have refinements of your state space. The state space Phi, which we discussed before, we stated it is the space of all the ways the universe can be, but described in what terms, right? We can describe the ways the universe can be in terms of tennis balls or people or bacteria, or in terms of atoms or quantum fields. You can have different levels of granularity. And the nice thing is that we have a natural notion of refinement. Given a hypothesis, we can consider various refinements, which are higher granularity descriptions of the universe, which are consistent with the hypothesis that we have on this coarse level.
**Vanessa Kosoy:**
The agent is always trying to find refinements of the hypothesis to test, in order to use this refinement to have less loss, or more utility. The thing is that this process is not bounded by anything. The agent has Knightian uncertainty. As far as it knows, the description of the universe it has might be just a coarse-grained description of something else, more refined, which is happening in the background. And because you can always have more and more refinements, you cannot ever have any level of certainty that there is no longer any refinement. You can always have more programs running, because maybe the more refined description has additional programs running that the more coarse-grained description does not capture. It’s like, maybe the description we have in terms of quantum fields - maybe there are some sub-quantum-field thingys which we don’t know about that encode lots of suffering humans, and we don’t know. That’s at least how you think if you’re an IB physicalist.
#### How to care about various things
**Daniel Filan:**
Okay. That does seem counterintuitive. Let’s put that aside for the moment. You said that the loss function was just in terms of which computational universe you’re actually in and what you know about the computational universe. Which element of Gamma and which element of 2^Gamma. There’s some intuition that it makes sense to want things about the physical state. In fact earlier you said you might want to have a utility function that just values creating paper clips, even if you don’t know about the paper clips. How do I phrase that in this setting?
**Vanessa Kosoy:**
So this is an interesting question. You could try to have loss functions that care directly about the physical state, but that quickly runs into the same problems that you were trying to avoid. Then you end up again requiring some bridge rules as part of your hypothesis, because the things you care about are not the quarks, the things you care about are some complex macroscopic thingys. You end up requiring your hypothesis to have some bridge rules that would explain how to produce these macroscopic thingys, and this creates all the problems you were trying to avoid. The radical answer that physicalist agents have - or this kind of purist physicalist agents - is to say “No. We are just going to be computationalist. We do not care directly about physical thingys, we only care about computations”.
**Vanessa Kosoy:**
If you want to be a physicalist paperclip maximizer, then what you need to do in this case is have some model of physics in which it’s possible to define a notion of paperclips and say, the computations that I care about are the computations comprising this model of physics. If there’s a computation that’s simulating a physical universe that has a lot of paperclips in it, that’s the computation that you want to be running. That’s what it means to be a paperclip maximizer if you’re a physicalist.
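As a rough sketch of what that could look like (my own toy example, with hypothetical program names, not anything from the paper), a physicalist paperclip loss only inspects which computations run and what they output:

```python
# Toy physicalist paperclip loss (a sketch with made-up program names):
# it never looks at physical states, only at which computations are running
# and what they output. It is also monotone: adding more running programs
# can only lower the loss.
def paperclip_loss(running_computations):
    """running_computations: dict mapping program name -> its output, where each
    hypothetical 'physics_sim*' program reports the paperclip count of the
    universe it simulates."""
    best = 0
    for program, output in running_computations.items():
        if program.startswith("physics_sim") and isinstance(output, int):
            best = max(best, output)
    return 1.0 / (1.0 + best)

print(paperclip_loss({"physics_sim_A": 10, "weather_model": "rain"}))  # low loss
print(paperclip_loss({"weather_model": "rain"}))                        # high loss
```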
**Daniel Filan:**
Okay, cool. And what would it look like to have a selfish loss function where I want to be a certain way or I want to be happy?
**Vanessa Kosoy:**
If you’re doing a selfish loss function, then the computations you are looking at are your own source code. We had this thing before, which we used to define those counterfactuals, which is you know your own source code. So you’re checking whether the universe is running your source code with particular inputs. And that’s also something that you can use to build your loss function. You can say your loss function is that, I want the universe to run the source code which is me, with an input that says I’m taking a nice bath and eating some really tasty food.
**Daniel Filan:**
And pro-social loss functions would be something like me having a loss function that says, “I want there to be lots of programs that simulate various people, what they would do if they were in a nice bath and eating tasty food”. Is that roughly right?
**Vanessa Kosoy:**
You can think of programs which represent other people, or you can think about something like a program which represents society as a whole. You can think of society also as a certain computation where you have different people and you have the interactions between those people. All of this thing is some kind of a computation which is going on. So, I want the universe to be running this kind of computation with some particular thingys which make me like this kind of society, rather than the other kind of society.
**Daniel Filan:**
Another question I have is, one preference that it seems you might want to express is: if physics is this way, I want this type of thing; but if physics works some different way then I don’t want that anymore. You could imagine thinking, if classical physics exists then I really care about there being certain types of particles in certain arrangements. But if quantum physics exists, then I realize that I don’t actually care about those. I only cared about them because I thought they were made out of particles. Maybe in the classical universe, I cared about particles comprising the shapes of chairs, but in the quantum universe I want there to be sofas. I don’t want there to be chairs because sofas fit the quantum universe better. Can I have preferences like that?
**Vanessa Kosoy:**
In some sense you cannot. If you’re going with infra-Bayesian physicalism you’re committing to computationalism. So committing to computationalism means that there is no difference between a thing and a simulation of the thing. So [this](https://en.wikipedia.org/wiki/The_Treachery_of_Images#/media/File:MagrittePipe.jpg) is actually a pipe. If there’s some type of physical thing that you really want to exist, or someone is just running a simulation of this physical thing, from your perspective, that’s equally valuable. This is an interesting philosophical thing which you can find objectionable, or on the contrary you can find it kind of liberating in the sense that it absolves you from all the philosophical conundrums that come up with, what is even the difference between something running in a simulation and not in a simulation.
**Vanessa Kosoy:**
Maybe we are all a simulation that’s running inside quantum strings in some sense, but does it mean that we are not real? Does it really matter whether the universe is made of strings or quarks on a fundamental level, or something else? Should it change the extent to which I like trees and happy people, or whatever the thing I like is? If you’re a computationalist you say “I don’t care what’s the basic substrate of the universe. I just care about the computation.”
**Daniel Filan:**
It seems weird because it seems even hard to distinguish - I might have thought that I could distinguish, here’s one universe that actually runs on quantum mechanics, but it simulates a classical mechanical universe. That’s world A. In world B, it actually runs on classical mechanics, but there’s a classical computer that simulates the quantum world. It seems like in infra-Bayesian physicalism, my loss function can’t really distinguish world A from world B. Is that right?
**Vanessa Kosoy:**
That’s about right. So, you care about which computations are running, or what outputs they have, but you don’t care about the physical implementation of these computations at all.
### Decision theory
**Daniel Filan:**
Okay. So to wrap that up: together, we have these world models and we have these loss functions. Can you just reiterate how you make decisions given these world models and these loss functions?
**Vanessa Kosoy:**
The way you make decisions is by applying counterfactuals corresponding to different policies. You’re considering, what if I follow this policy, what if I follow that policy, and now you’re supposed to somehow compute what your expected loss is going to be and choose the policy which has the minimal expected loss. The way you actually do it, is you construct counterfactuals corresponding to different policies and the way you construct those counterfactuals is - well, to first approximation they’re just the same kind of logical or computational counterfactual we had before. They’re just saying, suppose that the computation which is me, is producing outputs, which are consistent with this policy, and let’s apply this counterfactual the way we always do it to our hypothesis or prior, and let’s evaluate the expected loss from that. But the thing that you actually need to do is slightly more tricky than that.
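Schematically, and with the caveat that the paper’s actual definitions are more involved, this first-approximation rule has the usual infra-Bayesian minimax shape:

```latex
% Schematic decision rule (a paraphrase, not the paper's exact definition):
% C_pi is the counterfactual "the computation which is me acts according to pi",
% Theta(C_pi) is the resulting set of distributions (Knightian uncertainty),
% and the max implements worst-case evaluation of the loss L.
\[
  \pi^{*} \;=\; \operatorname*{arg\,min}_{\pi}\; \max_{\mu \,\in\, \Theta(C_{\pi})} \; \mathbb{E}_{\mu}\left[ L \right]
\]
```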
**Vanessa Kosoy:**
If you do this naïvely, then you run into problems where you are never going to be able to learn. You’re never going to be able to have good learning-theoretic guarantees. Because the problem with this is that if you’re doing things this way, then it means that you cannot rely on your memory to be true. You wake up with certain memories but you don’t know whether those memories are things that actually happened, or maybe someone just simulates you already having this memory. Because you don’t know whether those things actually happened, you cannot really update on them and you cannot really learn anything with certainty. But we can fix this by changing our definition of counterfactuals.
**Vanessa Kosoy:**
The way we change the definition is - informally, what it means is, I don’t take responsibility for the output of my own computation if it’s not continuous. If there is some continuous sequence of memories which is leading to a certain mental state, I then take responsibility for what I’m going to do in this mental state. But if something is discontinuous, if someone is just simulating me waking up with some weird memories, then I don’t take responsibility for what that weird simulation is going to do. I don’t use that in my loss calculus at all. I just consider this as something external and not under my control.
**Daniel Filan:**
Formally, what does that definition look like?
**Vanessa Kosoy:**
More formally, what happens is that to each policy you associate some subset of 2^Gamma times Gamma, the subset of things which are consistent with this policy. In the naïve version that would be: let’s look at the computational universe - the element of Gamma - and let’s look at the source code which is me and see that it outputs something consistent with the policy. In the more sophisticated version we’re saying, let’s also look at 2^Gamma and see with which inputs the universe is actually running me. Then we’re only going to apply our constraints to those inputs which have a continuous history. That’s one thing we need to do, and the other thing we need to do is related to this envelope. We haven’t really discussed it before, but there’s an important part where you’re doing this kind of Turing reinforcement learning thing.
**Vanessa Kosoy:**
Your agent has some external computer that it can use to do computational experiments, run all sorts of programs and see what happens. And there, you also need a corresponding guarantee that your computer is actually working correctly, right? Because if your computer has bugs and it’s just returning wrong things, then you cannot really update on seeing what it returns. It creates, again, problems with learning. And in order to not have that, you need to apply a similar fix, where you only apply the constraint of ‘your source code’s outputs are consistent with the policy’ on branches of the history in which you have only seen the computer saying true things.
**Daniel Filan:**
Okay. I can sort of see how you would operationalize that you only see the computer saying true things, but what does it look like to operationalize that there’s this continuous history of your program existing? If I go to sleep or go under general anesthesia, is that going to invalidate this condition?
**Vanessa Kosoy:**
That depends on your underlying model. In our formalism, the inputs to our source code, to our agent, are just sequences of actions and observations. If you go to sleep with some sequence of actions and observations and you wake up and the actions and observations continue from the same point - the things in between, they’re just not contributing to the sequence - that’s continuous as far as you’re concerned. The continuity is not physical continuity, but it’s more like logical continuity. Continuity means that someone runs your code with a certain observation, the action that you output on that observation - that’s also important by the way. Another type of thing, which is not allowed, is someone running you on memories of you doing something which you wouldn’t actually do. That’s also something which we exclude. So then we have observation, action leading from this observation, and another observation. And then you have a sequence of three observations, five, seven, and so on. Those form a continuous sequence. This is continuity. If you have some sequence on which someone is running you, but there is some prefix of the sequence on which the universe is not running you, then that’s considered not continuous.
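A heavily simplified sketch of that continuity check (mine, not the paper’s formalism, and with a toy policy made up for illustration): a history only counts if, at every step, the recorded action is the one the agent’s own policy would actually output on the preceding observations.

```python
# Simplified continuity check (a sketch, not the paper's definition).
def is_continuous(history, policy):
    """history: list of (observation, action) pairs.
    policy: function from (past (obs, action) pairs, current observation) to an action."""
    past = []
    for obs, act in history:
        if policy(tuple(past), obs) != act:
            # Someone "ran" a version of me doing something I would never do,
            # so I don't take responsibility for this branch.
            return False
        past.append((obs, act))
    return True

# Hypothetical toy policy: reply to "ping", otherwise wait.
toy_policy = lambda past, obs: "reply" if obs == "ping" else "wait"
print(is_continuous([("ping", "reply"), ("silence", "wait")], toy_policy))  # True
print(is_continuous([("ping", "wait")], toy_policy))                        # False
```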
**Daniel Filan:**
The point of this was so that you could prove loss bounds or something on agents. What loss bounds are you actually able to get in this setting?
**Vanessa Kosoy:**
We haven’t actually proved any loss bounds in this article, but the ideas show that you can prove at least some very simplistic loss bounds along the lines of: Let’s assume there is some experiment you can do, which can tell you in which universe you are, or some experiment which you can do, which can at least distinguish between some classes of universes in which you might be. And let’s further assume that this experiment in itself doesn’t carry any loss or almost any loss. Just committing to this experiment doesn’t cost you anything. Then, in this situation, the loss the agent will get would be as if it already knows in which universe or in which class of universes it exists from the start. Or a similar thing, which you can do with computational thingys is, assume that you can run some computation on your computer and the fact of running it in itself doesn’t cost you anything. Then you can get a loss bound, which corresponds to already knowing the result of this computation.
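Informally, and only as a paraphrase of the shape of guarantee described here rather than a theorem from the paper, the bound looks something like:

```latex
% Informal shape of the bound described above (a paraphrase, not a quoted theorem):
% if a (nearly) costless experiment identifies the class C(h) of the true
% hypothesis h, then the agent's expected loss is close to that of the policy
% which knows the class from the start.
\[
  \mathbb{E}\!\left[ L^{\pi}(h) \right] \;\le\; \mathbb{E}\!\left[ L^{\pi^{*}_{C(h)}}(h) \right] + \varepsilon
\]
```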
**Daniel Filan:**
That actually relates to another question that I wanted to ask, which is if you imagine implementing an agent in the infra-Bayesian setting, how tractably can you actually compute the outputs of this agent? Or is this going to be something like [AIXI](https://en.wikipedia.org/wiki/AIXI) where it’s theoretically nice, but you can’t actually ever compute it?
**Vanessa Kosoy:**
That’s definitely going to depend on your prior which is the same as with classical reinforcement learning. With classical reinforcement learning, if your prior is the Solomonoff prior, you get AIXI, which is uncomputable, but if your prior is something else, then you get something computable. You can take your prior to be some kind of a bounded Solomonoff prior, and then the result is computable, but still very, very expensive. Or, you can take your prior to be something really simple, like all MDPs with some number of states. And then the result is just computable in polynomial time in the number of states, or whatever. So the same kind of game is going on with infra-Bayesianism, depending on what prior you come up with, the computational complexity of computing the optimal policy or an approximately optimal policy is going to be different.
**Vanessa Kosoy:**
We don’t have a super detailed theory which already tells us what is happening in every case, but even in classical reinforcement learning without infra-Bayesianism we have large gaps in our knowledge of which exact priors are efficiently computable in this sense. Not to mention the infra-Bayesian case. I have some extremely preliminary results where you can do things like have infra-Bayesian versions of Markov decision processes and under some assumptions you can have policies which seem to be about as hard to compute as in the classical case. There is actually [some paper](https://arxiv.org/abs/2010.15020) which is not by me, which just talks about zero-sum games, reinforcement learning where you’re playing a zero-sum game, but actually can be shown to be equivalent to a certain infra-Bayesian thingy. They prove some kind of regret bound with - I think an efficient algorithm? I’m actually not 100% sure, but I think they have a computationally efficient algorithm. So at least in some cases you can have a computationally efficient algorithm for this but the general question is very much open.
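To give a flavour of that zero-sum-game connection, here is a small robust value-iteration sketch (my own toy construction, not from the cited paper): the “hypothesis” is a finite set of candidate transition kernels, and each action is scored by its worst case over that set.

```python
import numpy as np

# Toy robust ("infra-Bayesian"-flavoured) value iteration: a finite credal set
# of transition kernels, with the agent maximizing worst-case value over it.
n_states, n_actions, discount = 3, 2, 0.9
rng = np.random.default_rng(0)

# Two candidate transition tensors P[a, s, s'] and a reward table R[s, a].
kernels = [rng.dirichlet(np.ones(n_states), size=(n_actions, n_states)) for _ in range(2)]
reward = rng.random((n_states, n_actions))

V = np.zeros(n_states)
for _ in range(200):
    Q_worst = np.full((n_states, n_actions), np.inf)
    for P in kernels:
        Q = reward + discount * np.einsum("asn,n->sa", P, V)  # Q under this kernel
        Q_worst = np.minimum(Q_worst, Q)                       # keep the worst case
    V = Q_worst.max(axis=1)                                    # agent maximizes over actions

print("worst-case values:", V)
print("greedy policy:", Q_worst.argmax(axis=1))
```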
Follow-up research
------------------
### Infra-Bayesian physicalist quantum mechanics
**Daniel Filan:**
Getting more to questions about extensions and follow-ups, what follow-up work to either infra-Bayesianism in general or infra-Bayesian physicalism in particular are you most excited about?
**Vanessa Kosoy:**
There’s a lot of interesting directions that I want to pursue with this work at some point. One direction which is really interesting is solving the interpretation of quantum mechanics. Quantum mechanics, you can say, why do we even care about this? Who cares about quantum mechanics, AI is going to kill us all! So I think that it’s interesting in the sense that it’s a very interesting test case. The fact that we’re so confused on the philosophical level about quantum mechanics seems to be an indication of our insufficiently good understanding of metaphysics or epistemology or whatever, and it seems to be pretty related to this question of naturalized induction, which we have been talking about.
**Vanessa Kosoy:**
So if Infra-Bayesian physicalism is a good solution to naturalized induction, then we should expect it to produce a good solution to all the confusion of quantum mechanics. And here I have some fascinating, but very preliminary work, which shows that - I think it can solve the confusion. Specifically, I have some concrete mathematical way in which I can build the infra-Bayesian hypothesis that corresponds to quantum mechanics. And I think that I can prove… Well, I kind of sketched it. I haven’t really written out enough detail to be completely confident, except for some simple, special cases. But I believe that I can prove that with this construction, the bridge transform reproduces all the normal predictions of quantum mechanics.
**Vanessa Kosoy:**
So if I’m right about this, then it means that I actually know what quantum mechanics is. I have a completely physicalist description of quantum mechanics. All these questions about what actually exists - does the wave function exist? Is the wave function only a description of subjective knowledge? What’s going to happen if you are doing a Schrödinger cat experiment on yourself? All of those questions are questions that I can answer now.
**Daniel Filan:**
Okay. And what does that look like?
**Vanessa Kosoy:**
So the way it looks is basically that - well, in quantum mechanics, we have different observables and usually you cannot measure different observables simultaneously, unless they correspond to commuting operators. So what happens here is, the universe has some Hilbert space and it has some wave function, which is some state on this Hilbert space - pure, mixed, doesn’t matter. And then you have all the observables. And the universe is measuring all of them, actually. The universe is measuring all of them and the probability distribution over the outcomes of each observable separately is just given by the Born rule. But the joint distribution - and here is the funny thing - in the joint distribution, we just have complete Knightian uncertainty about what the correlations are. So we’re just imposing the Born rule for every observable separately, but we’re not imposing anything at all about their correlations.
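Roughly formalized (my paraphrase of the construction just described, not the paper’s full definitions), the hypothesis only pins down the Born-rule marginals and leaves the correlations as Knightian uncertainty:

```latex
% Rough formalization of the construction described above (a paraphrase).
% For each observable A = \sum_a a \Pi_a and universe state \rho, impose only
% the Born-rule marginal
\[
  \Pr[A = a] \;=\; \operatorname{Tr}\!\left( \rho\, \Pi_a \right),
\]
% and take the hypothesis to be the set of all joint outcome distributions
% with these marginals, i.e. complete Knightian uncertainty about correlations:
\[
  \Xi(\rho) \;=\; \left\{ \mu \;:\; \mu(A = a) = \operatorname{Tr}\!\left( \rho\, \Pi_a \right) \ \text{for every observable } A \text{ and outcome } a \right\}.
\]
```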
**Vanessa Kosoy:**
And I’m claiming that this thing produces the usual predictions of quantum mechanics. And one way to think about it is, when you’re doing the bridge transform, and then you’re kind of doing this infra-Bayesian decision theory on what results, you’re looking at the worst case. And the worst case is when the minimal number of computations is running, because of this monotonicity principle. So in some sense, what happens with quantum mechanics is, the decision-relevant joint distribution over all observables is the joint distribution which corresponds to making the minimal number of computations. It’s as if the universe is really lazy and it wants to run as few computations as possible while having the correct marginal distributions, and that’s what results. So the interesting thing about this is, you don’t have any - there’s no multiverse here. Every observable gets a particular value. You have some randomness, but it’s just normal randomness.
**Vanessa Kosoy:**
It’s not like some weird multiverse thing. There’s just only one universe. You can get some weird things if you are doing some weird experiment on yourself where you are becoming a Schrödinger cat and doing some weird stuff like that, you can get a situation where multiple copies of you exist. But if you’re not doing anything like that, you’re just one branch, one copy of everything. And yeah, that’s what it looks like.
**Daniel Filan:**
Sure. And does that end up - so in quantum mechanics, there’s [this theoretical result](https://en.wikipedia.org/wiki/Bell%27s_theorem) that says, as long as experiments have single outcomes and as long as they’re probabilistic in the way quantum mechanics says they are, you can’t have what’s called local realism, which is some sort of underlying true fact of the matter about what’s going on that also basically obeys the laws of locality, that also doesn’t internally do some faster-than-light or backwards in time signaling. And there are various parts of that you can give up, you can say, “Okay, well it’s just fine to not be local.” Or you can say, “Experiments have multiple results” or “there’s no underlying state” or something. It sounds like in this interpretation you’re giving up locality, is that right?
**Vanessa Kosoy:**
Yeah. It’s definitely not local in the sense that local is defined for the purpose of this theory. It’s absolutely not local. But what’s good about it is that it’s still, as far as I can tell, it’s still completely Lorentz invariant. So in this sense, it’s different from - there’s things like [the de Broglie-Bohm interpretation](https://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory), which also gives up on locality, but then it also loses Lorentz invariance, which is really bad.
**Daniel Filan:**
And Lorentz invariance is basically, do you play well with special relativity, right?
**Vanessa Kosoy:**
Yeah. And here you don’t have any problem like that.
### Infra-Bayesian physicalist agreement theorems
**Daniel Filan:**
Okay. That’s interesting. Well, I look forward to reading more about that in future. Are there any other follow-ups to this work that you’re excited by?
**Vanessa Kosoy:**
Yeah. Well, there are a lot of interesting things I want to do with this work, like proving physicalist regret bounds of some kind in more detail. Or like, one thing that I kind of want to do is have some kind of an enhanced Aumann agreement theorem for physicalists. Because what happens is that with the usual [Aumann agreement theorem](http://www.ma.huji.ac.il/~raumann/pdf/Agreeing%20to%20Disagree.pdf), you have the problem that it kind of assumes that all agents have the same prior, but if your agents are Cartesian, then in some sense, they don’t have the same prior. Because each agent has a prior which is defined from its own subjective point of view, and this can cause failures of agreement. And in particular, I claim that - [Paul Christiano](https://paulfchristiano.com/) has this thing, which is called [the Solomonoff Prior is Malign](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/), which can be summarized as: if your AI is doing something like the Solomonoff prior, then it can reach the conclusion that the world is actually a simulation by some other intelligent agents.
**Vanessa Kosoy:**
And those other intelligent agents can actually cause your AI to do bad things or cause your AI to do things that they want it to do. And this scenario is a kind of failure of Aumann agreement in the sense that the AI sees the same evidence as you, just from a different vantage point. But from the AI’s vantage point, it reaches the conclusion that it’s in a simulation, even though from your vantage point, you should not have reached the conclusion that it’s in a simulation. And I kind of conjecture that in infra-Bayesian physicalism you can prove some kind of an Aumann agreement-type theorem: that different agents inhabiting the same universe have this Aumann agreement because they don’t privilege their subjective points of view anymore, and therefore these kinds of failures cannot happen.
The production of infra-Bayesianism research
--------------------------------------------
**Daniel Filan:**
Okay. That’s interesting. So I guess another question is apart from follow ups, are there any complements to this line of research that you’re excited by? So things other people are doing that work really nicely with infra-Bayesianism or infra-Bayesian physicalism.
**Vanessa Kosoy:**
Well, I’m not super aware of things that people are actively doing and obviously work very nicely. I mean, just a couple of days ago, I read [this post](https://www.alignmentforum.org/posts/eqzbXmqGqXiyjX3TP/elk-thought-dump-1) by [Abram Demski](https://www.alignmentforum.org/users/abramdemski) in which he’s talking about his thoughts about [Eliciting Latent Knowledge](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit), which is this problem of: we have an AI and we’re trying to make the AI tell us everything that it knows about the world and tell us honestly. And a sub-problem that he discusses there is this issue where if you have different agents, they have a different subjective vantage point, so what does it even mean for them to talk about some shared truth? What does it mean for the AI to tell us honest answers if the AI’s truth is formulated in some completely different domain, which is its own subjective point of view. And Abram even mentions computationalism as a way to solve it.
**Vanessa Kosoy:**
And the nice thing here is that infra-Bayesianism actually formalizes computationalism and gives you this shared domain, which is which programs are running and what values do they take, which is a shared domain of truth between different agents that you can use to solve this philosophical issue.
**Daniel Filan:**
Okay. Interesting. I guess I’d like to talk a little bit about you as a researcher. So what does it look like for you to do research?
**Vanessa Kosoy:**
Well, I mean, my research is a very theoretical, mathematical type of research. There are several components to the process. One component is how do we translate the philosophical or informal problems that we care about into mathematics. And then there’s how do we solve the mathematical problems that result. And those two processes, they are living in some kind of closed loop with each other. Because if I come up with some mathematical model, then I analyze it and it leads to conclusions which don’t make sense in terms of the way I want to apply this mathematical model, then I know that I need to go back and revise my assumptions. And on the other hand, sometimes I’m playing with the math and something just comes out of playing with the math which can tell me that maybe it’s a good idea to make such and such assumption, or think about this informal concept in terms of this mathematical construct.
**Vanessa Kosoy:**
So there are two coupled processes. What’s guiding my thoughts is that there’s a goal. The goal is causing AI not to kill everyone. Right? And then there are sub-goals that we can derive from this goal, which are different problems that we want to understand better. Different confusions that we have that might have implications on our ability to think about AI safety, or just directly thinking about what kind of AI designs can have what kind of safety guarantees. And then those informal sub-problems, I’m then trying to come up with formal mathematical models for them. And my process here is to always try to start with the most simplistic model that’s not meaningless. If I have some informal problem and I’m trying to think of some kind of a model for it, I’m thinking let’s make the strongest possible simplifying assumptions that I can imagine under which I can do something, anything, as long as it doesn’t degenerate into something completely trivial.
**Vanessa Kosoy:**
If I can make a bunch of simplifying assumptions, and under those assumptions get a mathematical model in which I can prove something non-trivial - something which actually requires work to prove it and that doesn’t just follow directly from the definitions the way I define them - then it makes me feel like I’m making some progress. And once I have this kind of foothold, I can then go back and say, “Okay, so in this simplified toy world, we’ve solved the problem. Now let’s go back and see, how can we remove some of those simplifying assumptions.” And then hopefully step by step, you can kind of climb towards a solution which starts looking realistic.
**Vanessa Kosoy:**
And of course it looks like many convergent lines of research. There are many problems and eventually you need to solve all of them. And eventually they’re all kind of coupled and interacting with each other. But with each of them, you make a bunch of simplifying assumptions, and you manage to kind of divide it from the other problems. So there’s some kind of divide and conquer going on. But then as you start removing your simplifying assumptions, you also need to start merging those lines of research together until hopefully somewhere in the end will get a grand, big theory of AI and AI alignment, and we’ll be able to just solve everything we need.
Bridge rules and malign priors
------------------------------
**Daniel Filan:**
Nice. Let’s hope. So I’m about to wrap up, but before I do that, I want to ask, is there anything that you think I should have asked, but I didn’t?
**Vanessa Kosoy:**
I mean, one thing we haven’t really talked about is the whole malign prior thing and what are the implications of physicalism in this context.
**Daniel Filan:**
Sure. So what is the malign priors thing?
**Vanessa Kosoy:**
So the malign prior is a problem articulated by Paul Christiano. And recently there was also [this post](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign) by [Mark Xu](https://markxu.com/about) which gave some explanation, elaboration, and summary of [the previous post about that](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/). And this problem is interesting because it seems really, really weird. It triggers the [absurdity heuristic](https://www.lesswrong.com/tag/absurdity-heuristic) really hard. But the more you think about it, the more convincing it seems, at least to me. So the gist of what is going on is the following: in usual Cartesian approaches, we have those bridge rules. And those bridge rules produce very, very complex hypotheses. And this means that simulation hypotheses can sometimes start looking a lot more attractive than non-simulation hypotheses. Why? Because, imagine that there is some universe with some relatively simpler laws of physics that contains some kind of intelligent agents, and those intelligent agents want to take over our universe.
**Vanessa Kosoy:**
And they know that in our universe, we are building this AI, which is going to be really powerful. So what they might do is run a simulation of our AI seeing things in our universe, and they run it in a way such that it’s encoded in the most simple degrees of freedom that they can manage to control. They’re running the simulation and the output of the simulation is written into their analog of electron spins or whatever, something that’s fairly easy to describe in terms of basic physics, as opposed to things like pixels on the camera, which are very complex to describe. If they’re doing this thing, our AI can say, “Okay, what should I believe? Am I in this normal universe with the normal earth, but with these extremely complex bridge rules, which look completely contrived and arbitrary, and give a huge penalty to my hypothesis? Or maybe the correct thing is this simulation hypothesis with these much simpler bridge rules that come off from reading the output of the simulation off those electron spins or whatever?” That other thing now seems much, much more plausible. And what happens here is that the attackers -
**Daniel Filan:**
Sorry, wait. Why does the other thing seem more plausible?
**Vanessa Kosoy:**
Because the complexity is much lower: we don’t have the complex bridge rules, we have some much simpler bridge rules, so the total complexity can be lower.
**Daniel Filan:**
You’re penalizing complex hypotheses.
**Vanessa Kosoy:**
Yeah. And that can be super substantial. Maybe you shave a hundred bits off your hypothesis complexity. So now you have a factor of 2^100 in the relative likelihood. So you can end up in a situation where your AI not only considers the simulation hypothesis plausible, but is practically completely convinced that it’s in a simulation. The simulation hypothesis is just overwhelmingly likely from its perspective. And what’s happening here is that the attackers are exploiting that the AI is in a very, very special position in the universe. And the AI is thinking, “According to the normal hypothesis it’s just a completely random fact that I’m this agent in this place in the universe, as opposed to a different agent, or as opposed to just a random clump of dust somewhere in outer space. It’s just a completely random fact that I need to hard code into my hypothesis, that I’m this agent. Whereas from the perspective of the other hypothesis, there is a completely logical mechanistic explanation of why I’m seeing this. I’m seeing this because this is what the attackers want to attack.”
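The arithmetic behind that factor is simple enough to spell out (illustrative numbers only):

```python
# Under a simplicity prior, a hypothesis that is 100 bits shorter gets roughly
# 2**100 times more prior weight, which is what lets the simulation hypothesis
# dominate. (Illustrative numbers, not anything from a specific construction.)
bits_saved = 100
prior_odds_simulation_vs_normal = 2 ** bits_saved
print(prior_odds_simulation_vs_normal)  # 1267650600228229401496703205376
```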
**Daniel Filan:**
Okay. And why do they count as attackers? Why is this bad, if they’re simulating the same things?
**Vanessa Kosoy:**
Well, the bad thing comes because once they’ve convinced our AI that it’s in a simulation, they can alter its predictions in arbitrary ways. When the AI is asking itself, “Okay, what do I expect to see in the future?” The simulators have complete control of that. The only limitation they face is that if they make a prediction which is not true in our universe, then once the AI sees that it’s not true, the simulation hypothesis has been falsified. But by this point it might be too late. They can carefully engineer this future prediction in such a way that will cause the AI to do something irreversible.
**Vanessa Kosoy:**
And from this point, it already doesn’t matter that the prediction will be falsified. In their simulation, the prediction is that the AI should replace itself with a different AI that has different source code. And if it does not do that, then something completely horrible happens in the simulation. If it does do that, then something absolutely amazing and wonderful happens. And the AI is just going to do this. And once the AI has done this, then it’s already too late. The fact that the future reveals that the amazing, wonderful thing that was supposed to happen didn’t actually happen doesn’t save us anymore, because the AI has already replaced itself with some different AI, which is just doing whatever the attackers want it to do.
**Daniel Filan:**
Yeah. There are constructions of this where I think the simulating AI says, “Okay, this AI has to follow this exact, certain policy. It’s got to do all these things. If it ever deviates from that, incredibly terrible things are going to happen.” And you can just predict whatever you want. Because if it’s so terrible, you can convince the AI you’re attacking to just never deviate, so it never finds out. Seems a different spin on the same thing.
**Vanessa Kosoy:**
Yeah. In the reinforcement learning setting, the problem indeed gets worse, because like you said, counterfactual predictions can never be falsified. If you’re trying to avoid this by having your AI only do forecasting or something - like you’re trying to do some kind of [iterated distillation and amplification](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616) thing in which your AI is not actually doing reinforcement learning, but only doing prediction - then you’re still going to lose because of this other problem. The simulators are just going to make it output whatever prediction is going to cause irreversible consequences in our universe to benefit them.
**Daniel Filan:**
Okay. And basically the upshot is, this is why we should be worried about these bridge rule formalisms and this is why we should prefer something like infra-Bayesian physicalism, I guess.
**Vanessa Kosoy:**
Yes. With the caveat that maybe physicalism can also arise by itself in some sense. And this is something that we’re still a little confused about. So when we wrote the article, what we actually wrote there is that… One question you can ask is, okay, maybe if your agent is some kind of an AIXI or Cartesian agent, maybe it can just discover physicalism as one of the hypotheses in its prior. And from there on, it will just behave as a physicalist. And what happens is, if you are just trying to do this with AIXI, then you run into the problem that in order to define physicalism, the agent needs to use its own source code, but AIXI has infinitely long source code because it’s incomputable.
**Vanessa Kosoy:**
So this hypothesis is going to have infinite description complexity. But actually if instead of just AIXI, you’re thinking of Turing reinforcement learning, then you can kind of work around this, because the computer that you’re working on, it might be able to run some short versions of your source code. This is just a toy example, but if you’re AIXI, but your computer has a halting oracle, then your computer can implement AIXI as a short program and you can use that to define a physicalist hypothesis. So maybe certain types of Cartesian agents can - if they’re doing Turing reinforcement learning or whatever, but they’re still not doing the full physicalism thing, then maybe they can discover physicalism on their own. And that would ameliorate some of the problems, but you would still have some advantages if you just start off physicalist.
**Vanessa Kosoy:**
And even if it doesn’t actually matter, in practice, if you don’t actually need to explicitly make your agents physicalist, then it still seems really important if you want to analyze and understand what is actually going to happen because this shows you that the actual hypothesis your agent ends up with is something very different from what you could naïvely expect. So you really need to take this into account when you’re doing any kind of analysis.
Following Vanessa’s work
------------------------
**Daniel Filan:**
Okay. So that was the last question I was going to ask, so I guess we’re done now. So just while we’re wrapping up, if people listen to this interview and they want to learn more, they want to follow you and your work, how should they do that?
**Vanessa Kosoy:**
So the easiest thing is just to follow [me on Alignment Forum](https://www.alignmentforum.org/users/vanessa-kosoy). I post all of my work there, more or less. And if someone wants to ask me some concrete questions or is thinking of some collaboration or something, then I’m always happy to discuss AI Alignment things. They’re just welcome to email me at [email protected].
**Daniel Filan:**
Great. Well, thanks for appearing on the show, and to the listeners, I hope you’ll join us again.
**Vanessa Kosoy:**
Thank you for having me.
**Daniel Filan:**
This episode is edited by Jack Garrett. The opening and closing themes are also by Jack Garrett. The financial costs of making this episode are covered by a grant from the [Long Term Future Fund](https://funds.effectivealtruism.org/funds/far-future). To read a transcript of this episode, or to learn how to support the podcast, you can visit [axrp.net](https://axrp.net/). Finally, if you have any feedback about this podcast, you can email me at [[email protected]](mailto:[email protected]).
Encultured AI Pre-planning, Part 1: Enabling New Benchmarks
*Also available on the* [*EA Forum*](https://forum.effectivealtruism.org/posts/yczkGfcfWoRN6zfrf/encultured-ai-part-1-enabling-new-benchmarks)*.*
*Followed by:* [*Encultured AI, Part 2*](https://www.lesswrong.com/posts/2vxoTfuScspraSJeC/encultured-ai-part-2-providing-a-service) *(forthcoming)*
Hi! In case you’re new to Encultured AI, we’re a for-profit start-up with a public benefit mission: developing technologies promoting the long-term survival and flourishing of humanity and other sentient life. However, we also realize that AI poses an existential risk to humanity if not developed with adequate safety precautions. Given this, our goal is to develop products and services that help humanity steer toward the benefits and away from the risks of advanced AI systems. Per the “[Principles](https://www.encultured.ai/#about-us)” section of our homepage:
> Our current main strategy involves building a platform usable for AI safety and alignment experiments, comprising a suite of environments, tasks, and tools for building more environments and tasks. The platform itself will be an interface to a number of consumer-facing products, so our researchers and collaborators will have back-end access to services with real-world users. Over the next decade or so, we expect an increasing number of researchers — both inside and outside our company — will transition to developing safety and alignment solutions for AI technology, and through our platform and products, we’re aiming to provide them with a rich and interesting testbed for increasingly challenging experiments and benchmarks.
In the following, we’ll describe the AI existential safety context that motivated us to found Encultured, and go into more detail about what we’re planning to do.
What’s *trending* in AI x-safety?
---------------------------------
The technical areas below have begun to receive what we call “existential attention” from AI researchers, i.e., attention from professional AI researchers thinking explicitly about the impact of their work on existential safety:
* Trustworthiness & truthfulness — ensuring AI systems are telling us the truth and doing the things they and their creators say they’re going to do.
* Preference learning — enabling AI systems to learn what humans want.
* Interpretability — enabling humans to understand what AI systems are thinking and doing.
* Robustness & risk management — ensuring AI systems continue functioning well in novel situations, and quantifying the risk that they won’t.
In other words, the topics above lie in the intersection of mainstream AI research attention and explicit attention to existential safety.
**See** [**Appendix 1**](https://www.lesswrong.com/posts/PvuuBN39pmjw6wRpj/encultured-ai-part-1-appendix#Appendix_1___Trending__AI_x_safety_research_areas) for examples of research in these areas. More research in these areas is definitely warranted. A world where 20%+ of AI and ML researchers worldwide pivoted to focusing on the topics above would be a better world, in our opinion.
If our product is successful, we plan to grant access to researchers inside and outside our company for performing experiments in the areas above, interacting directly with users on our platform. And, our users will be aware of this ;) We’re planning on this not only because it will benefit the world, but because it will benefit our products directly: the most valuable tools and services are trustworthy, truthful, preference-sensitive, interpretable, and robust.
What’s *emerging* in AI x-safety?
--------------------------------
The following topics have received research attention from some researchers focused on existential safety, and AI research attention from other researchers, but to us the two groups don’t (yet) seem to overlap as much as for the ‘trending’ topics above.
* **Cooperative AI** — designing AI technologies in ways that enable improved cooperation between humans and AI systems, while preventing collusion between AI systems, i.e., cooperation between AI systems that would be harmful or deceptive to humanity. (see [Appendix 2](https://www.lesswrong.com/posts/PvuuBN39pmjw6wRpj/encultured-ai-part-1-appendix#Cooperative_AI) for related research.)
* **Multi-stakeholder control of AI systems —** allowing people with diverse values, such as from competing geopolitical factions, to share control of a single AI system. (see [Appendix 2](https://www.lesswrong.com/posts/PvuuBN39pmjw6wRpj/encultured-ai-part-1-appendix#Multi_stakeholder_control_of_AI_systems) for related research.)
Also see [Appendix 2](https://www.lesswrong.com/posts/PvuuBN39pmjw6wRpj/encultured-ai-part-1-appendix#Appendix_2___Emerging__AI_x_safety_research_areas) for a breakdown of why we think these areas are “emerging” in AI x-safety.
What’s *missing*?
-----------------
While continuing to advocate for the above, we’ve asked ourselves: what seems to be *completely missing* from research and discourse on AI existential safety? The following areas are topics that have been examined from various perspectives in AI research, but little or not at all from the perspective of x-safety:
1. **Life-aligned helpers**: Real-world living creatures, including humans, have numerous properties that distinguish them from abstract agents that are not embedded in the physical world. As such, it’s useful to experiment with AI systems assisting and caring for entities with some of the properties listed below.
* *Soft embodiment* — Humans are soft-bodied creatures! Robotics research in prosthetics, teleoperation, and surgery are the closest areas of AI research that address this aspect of human need, but research in these areas doesn’t usually consider its implications for x-safety.
* *Multi-scale health* — Humans can have health problems with their cells and organs, but can also have problems with mental health, unhealthy relationships, unhealthy communities, and even unhealthy geopolitical dynamics. We believe it is not a coincidence or mere metaphor that the concept of “health” is applied at all of these scales, and we want to enable benchmarks that test the ability to help people and living systems (e.g. communities) at multiple scales simultaneously.
Research in AI ethics and fairness can be viewed as addressing “health problems” at the scale of society, but these topics aren’t frequently examined from the perspective of x-safety.
* *Boundaries* — Humans and all natural living creatures maintain physical boundaries, such as cell membranes, skin, shelters (homes, offices), physical territories (e.g. private land, states), and even cognitive boundaries (e.g., accepted versus taboo topics). These boundaries may be treated as *constraints*, but they are more specific than that: they delineate regions or features of the world in which the functioning of a living system occurs. We believe many attempts to mitigate the negative impacts of AI technology in terms of “minimizing side effects” or “avoiding over-optimizing” can often be more specifically operationalized as *respecting boundaries.* Moreover, we believe there are abstract principles for respecting boundaries that are not unique to humans, and that are simple enough to be transferable across species and scales of organization. This belief is informed by the following sources of information:
+ Prof. Michael Levin’s research on organismal pattern homeostasis shows how some kinds of cancer — i.e., misaligned cellular behavior — can be caused and prevented through the closing and opening of intercellular gap junctions ([video presentation](https://www.youtube.com/watch?v=CDcgqVvojWU)). These effects persist in both the absence and the presence of oncogenes. In other words, by stimulating the opening and closing of cellular gap junctions, but without changing the genomes of the cells, we can cause genetically cancerous cells to revert to healthy (non-cancerous) behavior, and cause healthy cells to form cancerous tumors. This means the mechanism of cancer is closely mediated by how cells manage their boundaries.
+ The late Prof. Jaak Panksepp wrote an excellent textbook, *Affective Neuroscience: the Foundations of Human and Animal Emotions*([amazon](https://www.amazon.com/Affective-Neuroscience-Foundations-Emotions-Science/dp/019517805X)), explaining how many aspects of mammalian emotions are shared across species, and rooted in shared neurological structures. Panksepp’s work is too much to summarize here, but Nick and I both found the book very compelling, and Nick’s paper with Dr. Gopal Sarma, “Mammalian Value Systems” ([arxiv, 2016](https://arxiv.org/abs/1607.08289)), argues that Panksepp’s insights should inform value alignment for AI. In particular, we now believe certain important aspects of human values are simple enough to be genetically encoded and shared across species, and among those values are emotional heuristics for managing boundaries between individuals, including nurturance, lust, playfulness, fear, anger, and separation anxiety.
+ Humans can learn to navigate the social boundaries of other species such as lions ([video](https://www.youtube.com/watch?v=hWFesO_kTRI)) and bees ([video](https://www.youtube.com/c/TexasBeeworks)). These individual successes have not been subject to academic study, so we cite them as illustrations of the patterns of cooperative boundary-management we believe are possible, rather than as strong sources of independent evidence.
* *Other complexities and imperfections —* The subsystems of living systems are often suboptimal, and thus not easily described as “the optimal solution to X” for any simple optimization problem X. It’s important for AI systems to be able to assist and care for such systems, because we are such systems!
2. **Culturally-grounded AI:** A core difference between humans and other animals is our reliance on an exceptionally vast culture. This pervades all aspects of our behavior. As a central example, most animals communicate in a species-universal way (e.g., cats around the world use roughly the same kinds of body language), but humans communicate primarily through a wide variety of mutually unintelligible languages and movements acquired during long-term real-world interactions with existing language users.
Cultural acquisition is a large part of how humans align with one another’s values, especially during childhood but also continuing into adulthood. We believe attention to culture and the process of cultural acquisition is important in AI value alignment for several reasons:
* AI systems should be tested in simulations of simplified human-like cultures, rather than only in simulations of autonomous agents.
* AI systems attempting to serve human values would do well to model humans as engaging in a great deal of cultural acquisition amongst themselves.
* AI could in principle be designed to acquire human culture in a manner similar to how humans acquire it.
* AI developers and AI systems should be cognizant of the potential to change human culture through interaction, so as to avoid triggering undesirable value drift.
To make sure these aspects of safety can be addressed on our platform, we decided to start by working on a physics engine for high-bandwidth interactions between artificial agents and humans in a virtual environment.
Recap
-----
We think we can create opportunities for humanity to safety-test future AI systems by building a platform designed for that kind of testing. We're looking to enable testing for both popular and neglected safety issues, and we think we can make a platform that brings them all together.
In our next post, we'll talk about how and why we decided to provide a consumer-facing product as part of our platform.
*Followed By:*
[*Encultured AI, Part 1 Appendix: Relevant Research Examples*](https://www.lesswrong.com/posts/PvuuBN39pmjw6wRpj/encultured-ai-part-1-appendix)
[*Encultured AI Pre-planning, Part 2: Providing a Service*](https://www.lesswrong.com/posts/2vxoTfuScspraSJeC/encultured-ai-part-2-providing-a-service)
Spreading messages to help with the most important century
In the [most important century](https://www.cold-takes.com/most-important-century/) series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.
In [this more recent series](https://www.cold-takes.com/tag/implicationsofmostimportantcentury/), I’ve been trying to help answer this question: **“So what? What can I do to help?”**
So far, I’ve just been trying to build a picture of some of the major risks we might face (especially the [risk of misaligned AI](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) that [could defeat all of humanity](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/)), what might be [challenging about these risks](https://www.cold-takes.com/ai-safety-seems-hard-to-measure/), and [why we might succeed anyway](https://www.cold-takes.com/high-level-hopes-for-ai-alignment/). Now I’ve finally gotten to the part where I can start laying out tangible ideas for how to help (beyond the [pretty lame suggestions](https://www.cold-takes.com/call-to-vigilance/) I gave before).
This piece is about one broad way to help: **spreading messages** that ought to be more widely understood.
One reason I think this topic is worth a whole piece is that **practically everyone can help with spreading messages at least some,** via things like talking to friends; writing explanations of your own that will appeal to particular people; and, yes, posting to Facebook and Twitter and all of that. Call it slacktivism if you want, but I’d guess it can be a big deal: many extremely important AI-related ideas are understood by vanishingly small numbers of people, and a bit more awareness could snowball. Especially because these topics often feel too “weird” for people to feel comfortable talking about them! Engaging in credible, reasonable ways could contribute to an overall background sense that it’s *OK to take these ideas seriously.*
And then there are a lot of potential readers who might have *special* opportunities to spread messages. Maybe they are professional communicators (journalists, bloggers, TV writers, novelists, TikTokers, etc.), maybe they’re non-professionals who still have sizable audiences (e.g., on Twitter), maybe they have unusual personal and professional networks, etc. Overall, the more you feel you are good at communicating with some important audience (even a small one), the more this post is for you.
That said, **I’m not excited about blasting around hyper-simplified messages.** As I hope this series has shown, the challenges that could lie ahead of us are complex and daunting, and shouting stuff like “AI is the biggest deal ever!” or “AI development should be illegal!” could do more harm than good (if only by associating important ideas with being annoying). Relatedly, I think it’s generally **not good enough to spread the most broad/relatable/easy-to-agree-to version of each key idea,** like “AI systems could harm society.” Some of the unintuitive details are crucial.
Instead, the **gauntlet I’m throwing is: “find ways to help people understand the core parts of the challenges we might face, in as much detail as is feasible.”** That is: the goal is to try to help people get to the point where they could maintain a reasonable position in a detailed back-and-forth, not just to get them to repeat a few words or nod along to a high-level take like “AI safety is important.” This is a **lot** harder than shouting “AI is the biggest deal ever!”, but I think it’s worth it, so I’m encouraging people to rise to the challenge and stretch their communication skills.
Below, I will:
* Outline some general challenges of this sort of message-spreading.
* Go through some ideas I think it’s risky to spread too far, at least in isolation.
* Go through some of the ideas I’d be most excited to see spread.
* Talk a little bit about how to spread ideas - but this is mostly up to you.
Challenges of AI-related messages
---------------------------------
Here’s a simplified story for how spreading messages could go badly.
* You’re trying to convince your friend to care more about AI risk.
* You’re planning to argue: (a) AI could be really powerful and important within our lifetimes; (b) Building AI too quickly/incautiously could be dangerous.
+ Your friend just isn’t going to *care* about (b) if they aren’t sold on some version of (a). So you’re starting with (a).
* Unfortunately, (a) is easier to understand than (b). So you end up convincing your friend of (a), and not (yet) (b).
* Your friend announces, “Aha - I see that AI could be tremendously powerful and important! I need to make sure that people/countries I like are first to build it!” and runs off to help build powerful AI as fast as possible. They’ve chosen the [competition frame (“will the right or the wrong people build powerful AI first?”) over the caution frame](https://www.cold-takes.com/making-the-best-of-the-most-important-century/) (“will we screw things up and all lose?”), because the competition frame is [easier to understand](https://www.cold-takes.com/making-the-best-of-the-most-important-century/#why-i-fear-).
* Why is this bad? [See previous pieces](https://www.cold-takes.com/tag/implicationsofmostimportantcentury/) on the importance of caution.
More on the “competition” frame vs. the “caution” frame
In a [previous piece](https://www.cold-takes.com/making-the-best-of-the-most-important-century/), I talked about two contrasting frames for how to make the best of the most important century:
**The caution frame.** This frame emphasizes that a furious race to develop powerful AI could end up making *everyone* worse off. This could be via: (a) AI forming [dangerous goals of its own](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) and [defeating humanity entirely](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/); (b) humans racing to gain power and resources and “[lock in](https://www.cold-takes.com/how-digital-people-could-change-the-world/#lock-in)” their values.
Ideally, everyone with the potential to build [powerful enough AI](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:
* Working to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of [Pugwash](https://en.wikipedia.org/wiki/Pugwash_Conferences_on_Science_and_World_Affairs) (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.
* Discouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward [standards and monitoring](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/#global-monitoring), etc. Slowing things down in this manner could buy more time to do research on avoiding [misaligned AI](https://www.cold-takes.com/making-the-best-of-the-most-important-century/#worst-misaligned-ai), more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarity.
**The “competition” frame.** This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.
* If something like [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.
* In addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.
This means it could matter enormously "who leads the way on transformative AI" - which country or countries, which people or organizations.
Some people feel that we can make confident statements today about which specific countries, and/or which people and organizations, we should hope lead the way on transformative AI. These people might advocate for actions like:
* Increasing the odds that the first PASTA systems are built in countries that are e.g. less authoritarian, which could mean e.g. pushing for more investment and attention to AI development in these countries.
* Supporting and trying to speed up AI labs run by people who are likely to make wise decisions (about things like how to engage with governments, what AI systems to publish and deploy vs. keep secret, etc.)
**Tension between the two frames.** People who take the "caution" frame and people who take the "competition" frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.
For example, people in the "competition" frame often favor moving forward as fast as possible on developing more powerful AI systems; for people in the "caution" frame, haste is one of the main things to avoid. People in the "competition" frame often favor adversarial foreign relations, while people in the "caution" frame often want foreign relations to be more cooperative.
That said, this dichotomy is a simplification. Many people - including myself - resonate with both frames. But I have a **general fear that the “competition” frame is going to be overrated by default** for a number of reasons, as I discuss [here](https://www.cold-takes.com/making-the-best-of-the-most-important-century/#why-i-fear-).
Unfortunately, I’ve seen something like the above story play out in **multiple significant instances** (though I shouldn’t give specific examples).
And I’m especially worried about this dynamic when it comes to people in and around governments (especially in national security communities)*,* because I perceive governmental culture as particularly obsessed with *staying ahead of other countries* (“If AI is dangerous, we’ve gotta build it first”) and comparatively uninterested in *things that are dangerous for our country because they’re dangerous for the whole world at once* (“Maybe we should worry a lot about pandemics?”)[1](https://www.cold-takes.com/p/fbae8068-6543-4776-af3b-bedab1d7b74a#fn1)
You could even [argue](https://twitter.com/michael_nielsen/status/1350544365198839808) (although I wouldn’t agree) that to date, efforts to “raise awareness” about the dangers of AI have done more harm than good (via causing increased investment in AI, generally).[2](https://www.cold-takes.com/p/fbae8068-6543-4776-af3b-bedab1d7b74a#fn2)
So it’s tempting to simply give up on the whole endeavor - to stay away from message spreading entirely, beyond people you know well and/or are pretty sure will internalize the important details. But I think we can do better.
This post is aimed at people who are **good at communicating** with at least some audience. This could be because of their skills, or their relationships, or some combination. In general, I’d expect to have more success with people who hear from you a lot (because they’re your friend, or they follow you on Twitter or Substack, etc.) than with people you reach via some viral blast of memery - but maybe you’re skilled enough to make the latter work too, which would be awesome. I'm asking communicators to hit a high bar: leave people with strong understanding, rather than just getting them to repeat a few sentences about AI risk.
Messages that seem risky to spread in isolation
-----------------------------------------------
First, here are a couple of messages that I’d rather people *didn’t* spread (or at least have mixed feelings about spreading) in isolation, i.e., without serious efforts to include some of the other messages I cover [below](https://www.cold-takes.com/p/fbae8068-6543-4776-af3b-bedab1d7b74a#messages-that-seem-important-and-helpful-and-right).
One category is messages that generically emphasize the *importance* and *potential imminence* of powerful AI systems. The reason for this is in the previous section: many people seem to react to these ideas (especially when unaccompanied by some other key ones) with a “We’d better build powerful AI as fast as possible, before others do” attitude. (If you’re curious about why I wrote [The Most Important Century](https://www.cold-takes.com/most-important-century/) anyway, see footnote for my thinking.[3](https://www.cold-takes.com/p/fbae8068-6543-4776-af3b-bedab1d7b74a#fn3))
Another category is messages that emphasize that AI could be risky/dangerous to the world, without much effort to fill in *how*, or with an emphasis on easy-to-understand risks.
* Since “dangerous” tends to imply “powerful and important,” I think this carries similar risks to those discussed in the previous section.
* If people have a bad model of *how and why* AI could be risky/dangerous (missing key risks and difficulties), they might be too quick to later say things like “Oh, turns out this danger is less bad than I thought, let’s go full speed ahead!” [Below](https://www.cold-takes.com/p/fbae8068-6543-4776-af3b-bedab1d7b74a#ais-could-behave-deceptively), I outline how misleading “progress” could lead to premature dismissal of the risks.
Messages that seem important and helpful (and right!)
-----------------------------------------------------
### We should worry about conflict between misaligned AI and *all* humans
Unlike the messages discussed in the previous section, this one directly highlights why it might not be a good idea to rush forward with building AI oneself.
The idea that an AI could harm the *same humans who build it* has very different implications from the idea that AI could be generically dangerous/powerful. Less “We’d better get there before others,” more “there’s a case for moving slowly and working together here.”
The idea that AI could be a problem for the same people who build it is common in fictional portrayals of AI ([HAL 9000](https://en.wikipedia.org/wiki/HAL_9000), [Skynet](https://en.wikipedia.org/wiki/Skynet_(Terminator)), [The Matrix](https://en.wikipedia.org/wiki/The_Matrix), [Ex Machina](https://en.wikipedia.org/wiki/Ex_Machina_(film))) - maybe too much so? It seems to me that people tend to balk at the “sci-fi” feel, and what’s needed is more recognition that this is a serious, real-world concern.
The main pieces in this series making this case are [Why would AI “aim” to defeat humanity?](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) and [AI could defeat all of us combined](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/). There are many other pieces on the alignment problem (see list [here](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#fn3)); also see [Matt Yglesias's case](https://www.slowboring.com/p/the-case-for-terminator-analogies) for specifically embracing the “Terminator”/Skynet analogy.
I’d be especially excited for people to spread messages that help others understand - at a mechanistic level - *how and why* AI systems could end up with dangerous goals of their own, deceptive behavior, etc. I worry that by default, the concern sounds like lazy anthropomorphism (thinking of AIs just like humans).
Transmitting ideas about the “how and why” is a lot harder than getting people to nod along to “AI could be dangerous.” I think there’s a lot of effort that could be put into simple, understandable yet relatable metaphors/analogies/examples (my pieces make some effort in this direction, but there’s tons of room for more).
### AIs could behave deceptively, so “evidence of safety” might be misleading
I’m very worried about a sequence of events like:
* As AI systems become more powerful, there are some concerning incidents, and widespread concern about “AI risk” grows.
* But over time, AI systems are “better trained” - e.g., given reinforcement to stop them from behaving in unintended ways - and so the concerning incidents become less common.
* Because of this, concern dissipates, and it’s widely believed that AI safety has been “solved.”
* But what’s actually happened is that the “better training” has caused AI systems to *behave deceptively* - to *appear* benign in most situations, and to cause trouble only when (a) this wouldn’t be detected or (b) humans can be [overpowered entirely](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/).
I worry about AI systems’ being deceptive in the same way a human might: going through chains of reasoning like “If I do X, I might get caught, but if I do Y, no one will notice until it’s too late.” But it can be hard to get this concern taken seriously, because it means attributing behavior to AI systems that we currently associate exclusively with humans (today’s AI systems don’t really do things like this[4](https://www.cold-takes.com/p/fbae8068-6543-4776-af3b-bedab1d7b74a#fn4)).
One of the central things I’ve tried to spell out in this series is *why* an AI system might engage in this sort of systematic deception, despite being very unlike humans (and not necessarily having e.g. emotions). It’s a major focus of both of these pieces from this series:
* [Why would AI “aim” to defeat humanity?](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/)
* [AI Safety Seems Hard to Measure](https://www.cold-takes.com/ai-safety-seems-hard-to-measure/)
Whether this point is widely understood seems quite crucial to me. We might end up in a situation where (a) there are big commercial and military incentives to rush ahead with AI development; (b) we have what *seems like* a set of reassuring experiments and observations.
At that point, it could be key whether people are asking tough questions about the many ways in which “evidence of AI safety” could be misleading, which I discussed at length in [AI Safety Seems Hard to Measure](https://www.cold-takes.com/ai-safety-seems-hard-to-measure/).
(Click to expand) Why AI safety could be hard to measure
In previous pieces, I argued that:
* If we develop powerful AIs via ambitious use of the “black-box trial-and-error” common in AI development today, then there’s a substantial risk that:
+ These AIs will develop [unintended aims](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) (states of the world they make calculations and plans toward, as a chess-playing AI "aims" for checkmate);
+ These AIs could deceive, manipulate, and even [take over the world from humans entirely](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) as needed to achieve those aims.
* People today are doing AI safety research to prevent this outcome, but such research has a [number of deep difficulties](https://www.cold-takes.com/ai-safety-seems-hard-to-measure/):
**“Great news - I’ve tested this AI and it looks safe.”** Why might we still have a problem?

| *Problem* | *Key question* | *Explanation* |
| --- | --- | --- |
| The **Lance Armstrong problem** | Did we get the AI to be **actually safe** or **good at hiding its dangerous actions?** | When dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “*appearing* to behave well.” When professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them. |
| The **King Lear problem** | The AI is **(actually) well-behaved when humans are in control.** Will this transfer to **when AIs are in control?** | It's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't. AIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to [take control of the world entirely](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/). It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation. Like King Lear trying to decide how much power to give each of his daughters before abdicating the throne. |
| The **lab mice problem** | **Today's "subhuman" AIs are safe.** What about **future AIs with more human-like abilities?** | Today's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans. Like trying to study medicine in humans by experimenting only on lab mice. |
| The **first contact problem** | Imagine that **tomorrow's "human-like" AIs are safe.** How will things go **when AIs have capabilities far beyond humans'?** | AI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more. Like trying to plan for first contact with extraterrestrials (this barely feels like an analogy). |
An analogy that incorporates these challenges is Ajeya Cotra’s “young businessperson” [analogy](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/#analogy-the-young-ceo):
> Imagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).
>
> You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. ([More](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/#analogy-the-young-ceo))
If your applicants are a mix of "saints" (people who genuinely want to help), "sycophants" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and "schemers" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?
More: [AI safety seems hard to measure](https://www.cold-takes.com/ai-safety-seems-hard-to-measure/)
### AI projects should establish and demonstrate safety (and potentially comply with safety standards) before deploying powerful systems
I’ve [written about](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/#global-monitoring) the benefits we might get from “safety standards.” The idea is that AI projects should not deploy systems that pose too much risk to the world, as evaluated by a systematic evaluation regime: AI systems could be audited to see whether they are safe. I've outlined how AI projects might self-regulate by publicly committing to having their systems audited (and not deploying dangerous ones), and how governments could enforce safety standards both nationally and internationally.
Today, development of safety standards is in its infancy. But over time, I think it could matter a lot how much pressure AI projects are under to meet safety standards. And I think it’s not too early, today, to start spreading the message that **AI projects shouldn’t unilaterally decide to put potentially dangerous systems out in the world; the burden should be on them to demonstrate and establish safety before doing so.**
(Click to expand) How standards might be established and become national or international
I [previously](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/#global-monitoring) laid out a possible vision on this front, which I’ll give a slightly modified version of here:
* Today’s leading AI companies could self-regulate by committing not to build or deploy a system that they can’t convincingly demonstrate is safe (e.g., see Google’s [2018 statement](https://www.theweek.in/news/sci-tech/2018/06/08/google-wont-deploy-ai-to-build-military-weapons-ichai.html), "We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”).
+ Even if some people at the companies would like to deploy unsafe systems, it could be hard to pull this off once the company has committed not to.
+ Even if there’s a lot of room for judgment in what it means to demonstrate an AI system is safe, having agreed in advance that [certain evidence](https://www.cold-takes.com/p/fbae8068-6543-4776-af3b-bedab1d7b74a#ais-could-behave-deceptively) is *not* good enough could go a long way.
* As more AI companies are started, they could feel soft pressure to adopt similar self-regulation, since refusing to do so could be off-putting to potential employees, investors, etc.
* Eventually, similar principles could be incorporated into various government regulations and enforceable treaties.
* Governments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to [cyberwarfare](https://en.wikipedia.org/wiki/Stuxnet) or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about whether they’re meeting standards.
### Alignment research is prosocial and great
Most people reading this can’t go and become groundbreaking researchers on AI alignment. But they *can* contribute to a general sense that the people who can do this (mostly) should.
Today, my sense is that most “science” jobs are pretty prestigious, and seen as good for society. I have pretty mixed feelings about this:
* I think science has been [good for humanity historically](https://www.cold-takes.com/rowing-steering-anchoring-equity-mutiny/#rowing).
* But I worry that as technology becomes more and more powerful, there’s a growing risk of a catastrophe (particularly via AI or bioweapons) that wipes out all the progress to date and then some. (I've [written](https://www.cold-takes.com/has-violence-declined-when-we-include-the-world-wars-and-other-major-atrocities/) that the historical trend to date arguably fits something like "Declining everyday violence, offset by bigger and bigger rare catastrophes.") I think our current era would be a nice time to adopt an attitude of “proceed with caution” rather than “full speed ahead.”
* I resonate with Toby Ord’s comment (in [The Precipice](https://theprecipice.com/)), “humanity is akin to an adolescent, with rapidly developing physical abilities, lagging wisdom and self-control, little thought for its longterm future and an unhealthy appetite for risk.”
I wish there were more effort, generally, to distinguish between especially dangerous science and especially beneficial science. AI alignment seems squarely in the latter category.
I’d be especially excited for people to spread messages that give a sense of the specifics of different AI alignment research paths, how they might help or fail, and what’s scientifically/intellectually interesting (not just useful) about them.
The main relevant piece in this series is [High-level hopes for AI alignment](https://www.cold-takes.com/high-level-hopes-for-ai-alignment/), which distills a longer piece ([How might we align transformative AI if it’s developed very soon?](https://www.alignmentforum.org/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very)) that I posted on the Alignment Forum.
There are a (hopefully growing) number of other careers that I consider especially valuable, which I'll discuss in my next post on this topic.
### It might be important for companies (and other institutions) to act in unusual ways
In [Racing through a Minefield: the AI Deployment Problem](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/), I wrote:
> **A lot of the most helpful actions might be “out of the ordinary.”** When racing through a minefield, I hope key actors will:
>
> * Put more effort into alignment, threat assessment, and security than is required by commercial incentives;
>
> * Consider measures for [avoiding races](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/#avoiding-races) and [global monitoring](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/#global-monitoring) that could be very unusual, even unprecedented.
>
> * Do all of this in the possible presence of ambiguous, confusing information about the risks.
>
It always makes me sweat when I’m talking to someone from an AI company and they seem to think that commercial success and benefiting humanity are roughly the same goal/idea.
(To be clear, I don't think an AI project's only goal should be to avoid the risk of misaligned AI. I've given this risk a central place in this piece partly because I think it's especially at risk of being too quickly dismissed - but I don't think it's the only major risk. I think AI projects need to strike a tricky balance between the [caution and competition frames](https://www.cold-takes.com/p/fbae8068-6543-4776-af3b-bedab1d7b74a#Box1), and consider a number of issues [beyond the risk of misalignment](https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/). But I think it's a pretty robust point that they need to be ready to do unusual things rather than just following commercial incentives.)
I’m nervous about a world in which:
* Most people stick with paradigms they know - a company should focus on shareholder value, a government should focus on its own citizens (rather than global catastrophic risks), etc.
* As the [pace of progress accelerates](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#explosive-scientific-and-technological-advancement), we’re sitting here with all kinds of laws, norms and institutions that aren’t designed for the problems we’re facing - and can’t adapt in time. A good example would be the way [governance](https://www.cold-takes.com/ideal-governance-for-companies-countries-and-more/) works for a standard company: it’s legally and structurally obligated to be entirely focused on benefiting its shareholders, rather than humanity as a whole. (There are alternative ways of setting up a company without these problems.[5](https://www.cold-takes.com/p/fbae8068-6543-4776-af3b-bedab1d7b74a#fn5))
At a minimum (as I [argued previously](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/)), I think AI companies should be making sure they have whatever unusual governance setups they need in order to prioritize benefits to humanity - not returns to shareholders - when the stakes get high. I think we’d see more of this if more people believed something like: “It might be important for companies (and other institutions) to act in unusual ways.”
### We’re not ready for this
If we’re in the [most important century](https://www.cold-takes.com/most-important-century/), there’s likely to be a vast set of potential challenges ahead of us, most of which have gotten very little attention. (More here: [Transformative AI issues (not just misalignment): an overview](https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/))
If it were possible to slow everything down, by default I’d think we should. Barring that, I’d at least like to see people generally approaching the topic of AI with a general attitude along the lines of “We’re dealing with something really big here, and we should be trying really hard to be careful and humble and thoughtful” (as opposed to something like “The science is so interesting, let’s go for it” or “This is awesome, we’re gonna get rich” or “Whatever, who cares”).
I’ll re-excerpt this table from an [earlier piece](https://www.cold-takes.com/call-to-vigilance/#sharing-a-headspace):
| **Situation** | **Appropriate reaction (IMO)** |
| --- | --- |
| "This could be a billion-dollar company!" | "Woohoo, let's GO for it!" |
| "This could be the most important century!" | "... Oh ... wow ... I don't know what to say and I somewhat want to vomit ... I have to sit down and think about this one." |
I’m not at all sure about this, but one potential way to spread this message might be to communicate, with as much scientific realism, detail and believability as possible, about what the world might look like after explosive scientific and technological advancement brought on by AI (for example, a world with [digital people](https://www.cold-takes.com/how-digital-people-could-change-the-world/)). I think the enormous unfamiliarity of some of the [issues](https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/#new-life-forms) such a world might face - and the vast possibilities for [utopia](https://www.cold-takes.com/tag/utopia/) or [dystopia](https://www.cold-takes.com/how-digital-people-could-change-the-world/#virtual-reality-and-control-of-the-environment) - might encourage an attitude of not wanting to rush forward.
How to spread messages like these?
----------------------------------
I’ve tried to write a [series](https://www.cold-takes.com/tag/implicationsofmostimportantcentury/) that explains the key issues to careful readers, hopefully better equipping them to spread helpful messages. From here, individual communicators need to think about the audiences they know and the mediums they use (Twitter? Facebook? Essays/newsletters/blog posts? Video? In-person conversation?) and what will be effective with those audiences and mediums.
The main guidelines I want to advocate:
* Err toward sustained, repeated, relationship-based communication as opposed to prioritizing “viral blasts” (unless you are so good at the latter that you feel excited to spread the pretty subtle ideas in this piece that way!)
* Aim high: try for the difficult goal of “My audience walks away really understanding key points” rather than the easier goal of “My audience has hit the ‘like’ button for a sort of related idea.”
* A consistent piece of feedback I’ve gotten on my writing is that making things as concrete as possible is helpful - so giving real-world examples of problems analogous to the ones we’re worried about, or simple analogies that are easy to imagine and remember, could be key. But it’s important to choose these carefully so that the key dynamics aren’t lost.
Footnotes
---------
---
1. [Killer Apps](https://www.foreignaffairs.com/articles/2019-04-16/killer-apps) and [Technology Roulette](https://www.cnas.org/publications/reports/technology-roulette) are interesting pieces trying to sell policymakers on the idea that “superiority is not synonymous with security.” [↩](#fnref1)
2. When I imagine what the world would look like without any of the efforts to “raise awareness,” I picture a world with close to zero awareness of - or community around - major risks from transformative AI. While this world might *also* have more *time* left before dangerous AI is developed, on balance this seems worse. A future piece will elaborate on the many ways I think a decent-sized community can help reduce risks. [↩](#fnref2)
3. I do think “AI could be a huge deal, and soon” is a very important point that somewhat serves as a prerequisite for understanding this topic and doing helpful work on it, and I wanted to make this idea more understandable and credible to a number of people - as well as to [create more opportunities to get critical feedback and learn what I was getting wrong](https://www.cold-takes.com/where-ai-forecasting-stands-today/#reason-2-cunninghams-law). But I was nervous about the issues noted in this section. With that in mind, I did the following things:
   * The title, “most important century,” emphasizes a time frame that I expect to be less exciting/motivating for the sorts of people I’m most worried about (compared to the sorts of people I most wanted to draw in).
   * I tried to persistently and centrally raise concerns about misaligned AI (raising it in two pieces, including [one (guest piece) devoted to it](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/#powerful-models-could-get-good-performance-with-dangerous-goals), before I started discussing how soon transformative AI might be developed), and [extensively discussed](https://www.cold-takes.com/making-the-best-of-the-most-important-century/) the problems of overemphasizing “competition” relative to “caution.”
   * I [ended the series](https://www.cold-takes.com/call-to-vigilance/) with a piece arguing against being too “action-oriented.”
   * I stuck to “passive” rather than “active” promotion of the series, e.g., I accepted podcast invitations but didn’t seek them out. I figured that people with proactive interest would be more likely to give in-depth, attentive treatments rather than low-resolution, oversimplified ones.

   I don’t claim to be sure I got all the tradeoffs right. [↩](#fnref3)
4. There are some papers arguing that AI systems do things *something* like this (e.g., see the “Challenges” section of [this post](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)), but I think the dynamic is overall pretty far from what I’m most worried about. [↩](#fnref4)
5. E.g., [public benefit corporation](https://www.delawareinc.com/public-benefit-corporation/) [↩](#fnref5)
|
f58d91d7-76e5-4471-b6a8-b36e0e006418
|
trentmkelly/LessWrong-43k
|
LessWrong
|
SingInst bloomberg coverage [link]
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2011/12/19/BUFU1MD58T.DTL&type=printable
|
429460e9-69e7-4265-9163-ad5381f9c3e7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Reference Classes in the Doomsday Argument
I accidentally deleted my article titled "Reference Classes in the Doomsday Argument." There were some really thoughtful replies, and I sincerely apologize to everyone who commented. I'm new here and new to web forums in general, and I thought that I was deleting my saved draft. Aargh!
For those who were involved in the discussion, here is the permalink to the topic.
Edit: Renamed title to make the discussion easier to find (per nickernst's suggestion).
|
2a7b1fcb-01b0-43d8-b443-8ff07f40234c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Which possible AI systems are relatively safe?
Presumably some kinds of AI systems, architectures, methods, and ways of building complex systems out of ML models are safer or more alignable than others. Holding capabilities constant, you'd be happier to see some kinds of systems than others.
For example, Paul Christiano suggests "LM agents are an unusually safe way to build powerful AI systems." He says "My guess is that if you hold capability fixed and make a marginal move in the direction of (better LM agents) + (smaller LMs) then you will make the world safer. It straightforwardly decreases the risk of deceptive alignment, makes oversight easier, and decreases the potential advantages of optimizing on outcomes."
My quick list is below; I'm interested in object-level suggestions, meta observations, reading recommendations, etc. I'm particularly interested in design-properties rather than mere safety-desiderata, but safety-desiderata may inspire lower-level design-properties.
All else equal, it seems safer if an AI system:
* Is more interpretable
* If its true thoughts are transparent and expressed in natural language (see e.g. Measuring Faithfulness in Chain-of-Thought Reasoning)
* (what else?);
* Has humans in the loop (even better to the extent that they participate in or understand its decisions, rather than just approving inscrutable decisions);
* Decomposes tasks into subtasks in comprehensible ways, and in particular if the interfaces between subagents performing subtasks are transparent and interpretable;
* Is more supervisable or amenable to AI oversight (what low-level properties determine this besides interpretable-ness and decomposing-tasks-comprehensibly?);
* Is feedforward-y rather than recurrent-y (because recurrent-y systems have hidden states? so this is part of interpretability/overseeability?);
* Is myopic;
* Lacks situational awareness;
* Lacks various dangerous capabilities (coding, weapon-building, human-modeling, planning);
* Is more corrigible (what lower-level desir
|
54d51e05-17cd-48c9-bcae-dfac6fb92ea8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Austin, TX
Discussion article for the meetup : Austin, TX
WHEN: 20 August 2011 01:30:00PM (-0500)
WHERE: 2222B Guadalupe St Austin, Texas 78705
I'm back after missing three meetups! I hope you'll be there. Once again, at Caffe Medici, starting at 1:30PM.
Discussion article for the meetup : Austin, TX
|
5ad41ceb-7a03-4354-a931-15716df8be84
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Was a PhD necessary to solve outstanding math problems?
My composition teacher in college told me that in some pottery schools, the teacher holds up your pot, examines it, comments on it, and then smashes it on the floor. They do this for your first 100 pots.
In that spirit, this post's epistemic status is SMASH THIS POT.
Previous post: Was a terminal degree ~necessary for inventing Boyle's desiderata?
This is my second post investigating whether a terminal degree is practically ~necessary for groundbreaking scientific work of the 20th century.
Mathematics seems like a great field for outsiders to accomplish groundbreaking work. In contrast to other fields, many of its open problems can be precisely articulated well in advance. It requires no expensive equipment beyond computing power, and a proof is a proof is a proof.
Unlike awards like the Nobel Prize or Fields Medal, and unlike grants, a simple list of open problems established in advance seems immune to credentialism. It's a form of pre-registration of what problems are considered important. Wikipedia has a list of 81 open problems solved since 1995. ~146 mathematicians were involved in solving them (note: I didn't check for different people with the same last name). I'm going to randomly choose 30 mathematicians, and determine whether they got a PhD on or prior to the year of their discovery.
The categories will be No PhD, Partial PhD, PhD, evaluated in the year they solved the problem. In my Boyle's desiderata post, 2/15 (13%) of the inventors had no PhD. I'd expect mathematics to exceed that percentage.
Results:
Robert Connelly: PhD Anand Natarajan: PhD Mattman: PhD Croot: PhD Mineyev: PhD Taylor: PhD Antoine Song: Partial PhD Vladimir Voevodsky: PhD Ngô Bảo Châu: PhD Haas: PhD Andreas Rosenschon: PhD Paul Seymour: PhD (D. Phil) Oliver Kullmann: PhD Shestakov: PhD Merel: PhD Lu: PhD Knight: PhD Grigori Perelman: PhD Haiman: PhD Ken Ono: PhD Ben J. Green: PhD Demaine: PhD Jacob Lurie: PhD Harada: PhD McIntosh: PhD Naber: PhD Adam Parusinski: PhD Atiyah: Ph
|
ccdc69ac-c695-4856-8ce6-1067a73419cc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What "upside" of AI?
Consider, first, what humanity would want some aligned AGI to do, were it obtained? Set aside details, and let's suppose, in the main, we want (not instruct, just want) AGI to "make our lives better." (Too, AGI could "make us more productive," but if what is produced is bad, let us suppose we do not desire it).
But now a problem: AI in general is meant to do what we cannot do - else we would have done it, already, without AI - but to "make things better", that we could very readily have done, already. Therefore, since the "upside" of AI is to make things better as we could not, but we could in fact have made things better, only some impulse in humanity held off improvement - it follows, there is no upside of AI, at least, none we couldn't have procured for ourselves; whatever we would get of AI is no "upside".
That we could have done better follows from considering counterfactuals of a civilization which takes AI alignment seriously, for instance. And we certainly had many opportunities to create such a civilization. The famed "Forty acres and a mule," duly provided; a propounded and enacted "One Big Union," from an unsuppressed IWW; validated efforts at corrections reform on Norfolk Island by Maconochie, or ratification of the Equal Rights Amendment, or widespread adoption of the teachings of Mo Zi, of Emma Goldman - or practical use of Hero's steam engine, or the survival of Archimedes, Abel, and Galois, or -
We've had plenty of opportunities to "make things better". The AI, then, will rather be engaged on a salvage operation, not a mission of optimization. Hence it will have first to un-do - and any such need carries the danger of removing what we might value (perhaps we value most what makes us less than best). Recall too, since we are observably incapable of making ourselves nearer these counterfactuals, nearer some "best", then AI on the current machine learning paradigm, mimicking our data, so mimicking us, therefore is apt to become likewise what is inc
|
4dba8c98-99a8-4486-8d60-24946804e6c7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Anti-epistemology: an explanation
While there are more than a hundred thousand words on LW about the structure of good epistemology, as far as I can tell there is no nuts-and-bolts explanation of the most common anti-epistemology. I will try to rectify this omission, because I think I comprehended it.
The prototypical question of epistemology is some form of experiment, such as "what will I perceive when I pour these two liquids together?". After "what will I perceive" is packed into "what will happen", the question becomes observer-independent, and it is only natural that reality—the thing that answers the question—is itself observer-independent. The terrain would be there even if there were no maps of it. (Maps are intentionally created to predict approximate answers to a small subset of possible questions with much less effort/expense than it would take to answer the question in the terrain itself, by sacrificing the map's ability to answer the vast majority of possible questions correctly, let alone more cheaply than reality.)
In the unholy mirror image, the prototypical question is "who is most popular?". While it is possible for individuals to be mistaken, the answer has the form of a consensus (technically, common knowledge). Consequently, "reality" is inherently observer-dependent; it makes no sense to ask for the consensus of zero maps. There is no mention of the terrain in the theory (since an outright denial would be suspiciously specific), thus to the extent people are unable to compartmentalize away its practical intrusions into life, the stupidity can be overturned.
This would be bad enough in a hunter-gatherer band, but gets much worse in a society much larger than Dunbar's number, due to division of labor. Just as it in no longer possible for everyone to have the maps to answer all practical questions, and there is a niche for experts on subjects, it is no longer possible for everyone to know what society's consensus is on all questions, and there is a niche for experts at subject
|
312d0c87-9de2-4bd5-82a8-1e160412f016
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"Safety Culture for AI" is important, but isn't going to be easy
This is a linkpost (to the EA forum version of this post, which is) for a new preprint, entitled "Building a Culture of Safety for AI: Perspectives and Challenges," and a brief explanation of the central points. Comments on the ideas in the post are welcome, but much of the content which clarifies the below is in the full manuscript.
Safety culture in AI is going to be critical for many of the other promising initiatives for AI safety.
1. If people don't care about safety, most safety measures turn into box-ticking. Companies that don't care avoid regulation, or render it useless. That's what happens when fraudulent companies are audited, or when car companies cheat on emissions tests.
2. If people do care about safety, then audits, standards, and various risk-analysis tools can help get them there.
3. Culture can transform industries, and norms about trying to be safe can be really powerful as a way to notice and discourage bad actors.
However, there are lots of challenges to making such a culture.
1. Safety culture usually requires agreement about the risks. We don't have that in AI generally.
2. Culture depends on the operational environment.
1. When people have risks reinforced by always being exposed to them, or personally being affected by failures, they pay more attention. In AI, most risks are rare, occur in the future, and/or affect others more than the people responsible.
2. Most safety cultures are built around routines such as checklists and exercises that deal with current risks. Most AI risks aren't directly amenable to these approaches, so we can't reinforce culture with routines.
3. Cultures are hard to change once they get started.
1. AI gets cultural norms from academia, where few consider risks from their work, and there are norms of openness, and from the startup world, where companies generally want to "move fast and break things."
2. AI companies aren't prioritizing safety over profits - unlike airlines, nuclear powe
|
ec97aa8b-baa4-45f3-bdd9-94d179ffc49b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Prediction] We are in an Algorithmic Overhang, Part 2
In [Prediction] We are in an Algorithmic Overhang I made technical predictions without much explanation. In this post I explain my reasoning. This prediction is contingent on there not being a WWIII or equivalent disaster disrupting semiconductor fabrication.
----------------------------------------
I wouldn't be surprised if an AI takes over the world in my lifetime. The idea makes me uncomfortable. I question my own sanity. At first I think "no way could the world change that quickly". Then I remember that technology is advancing exponentially. The world is changing faster than ever has before and the pace is accelerating.
Superintelligence is possible. The laws of physics demand it. If superintelligence is possible then it is inevitable. Why hasn't we built one yet? There are four[1] candidate limitations:
* Data. We lack sufficient training data.
* Hardware. We lack the ability to push atoms around.
* Software. The core algorithms are too complicated for human beings to code.
* Theoretical. We're missing one or more major technical insights.
We're not limited by data
There is more data available on the Internet than in the genetic code of a human being plus the life experience of a single human being.
We're not (yet) limited by hardware
This is controversial but I believe throwing more hardware at existing algorithms won't bring them to human level.
I don't think we're limited by our ability to write software
I suspect that the core learning algorithm of human beings could be written in a handful of scientific papers comparable to the length and complexity of Einstein's Annus Mirabilis. I can't prove this. It's just gut instinct. If I'm wrong and the core learning algorithm(s) of human beings is too complicated to write in a handful of scientific papers then superintelligence will not be built by 2121.
Porting a mathematical algorithm to a digital computer is straightforward. Individual inputs like snake detector circuits can be learned by exist
|
5ccd8049-a0b0-4b98-9478-6baf447ea3ef
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What AI Posts Do You Want Distilled?
I'd like to distill AI Safety posts and papers, and I'd like to see more distillations generally. Ideally, posts and papers would meet the following criteria:
* Potentially high-impact for more people to understand
* Uses a lot of jargon or is generally complex and difficult to understand
* Not as well-known as you think they should be (in the AI X-risk space)
What posts meet these criteria?
|
c395aa6b-de35-4a5e-9ffc-8e9d2da3ada1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How many humans could potentially live on Earth over its entire future?
(Or person-years if we assume greatly extended life). How many years could the Earth be habitable for?
|
9a667b11-cd1e-4139-b811-c09f8caba57a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Book Review: Man's Search for Meaning by Victor Frankel
(Oops, I meant this to be a shortform)
"Everything can be taken from a man but one thing: the last of the human freedoms—to choose one’s attitude in any given set of circumstances, to choose one’s own way"
“He who has a why to live for can bear almost any how.” —Friedrich Nietzsche
The core premise of this book is well-known: that in order to survive the harshest and most terrible of circumstances one must have some kind of purpose to keep them going. This book describes the kind of mindset that one would have to adopt to survive in such circumstances, but the question that kept arising in my mind was, "Would it actually be worth it?". Would it actually be worth it to endure the degrading and inhumane conditions of a concentration camp given that it would almost certainly come to nothing? Perhaps if I knew that I was going to survive then I could find some meaning and growth in the suffering, but to go through all that suffering and for it all to be a waste... Or perhaps one could endure for love, but what would this amount to if you were likely to never be reunited, if they could already be dead? Or if one thought they could achieve the kind of good Victor Frankel did, but that lot falls to very few.
Victor Frankel is religious so he can find purpose in this. If there is a God and he he has a purpose, then one has played a role in this divine plan by bearing their suffering with dignity and could further hope to be rewarded in the afterlife. So he would view his life as having been meaningful even if he had perished, but what about the rest of us? Ultimately, I suspect that I may be asking too much out of him. These techniques could still provide a large amount of value, even if we reject his premise that any life is still worth leading, for a life can still be worth leading even if it involves a great deal of suffering.
His view that life is always worth living is expressed in the following quote: "Life asks you the meaning of life; you don't ask life. You ar
|
8f70dcc3-a0a7-4718-a1d9-ced591cecd17
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Type-safeness in Shell
Since writing the post on a hypothetical hull language as an alternative to shell I cannot stop thinking about the shortcomings of shell.
And one think that comes to mind over and over is type-safeness. Shell treats everything as a string and that's the source of both its power and its poor maintainability.
So when I ask whether shell can be improved, the question is actually more subtle: Can it be improved without compromising its versatility? Can we, for example, be more type-safe without having to type Java-like stuff on the command line? Without sacrificing the powerful and dangerous features like string expansion?
I mean, you can write shell-like scripts in Python even today and use type hints to get type safeness. But in real world this practice seems to be restricted to writing more complex programs, programs that require actual in-language processing, complex control flow, use of libraries and so on. Your typical shell script which just chains together a handful of UNIX utilities — no, I don't see that happening a lot.
To put it in other words, different "scripting languages" managed to carve their own problem spaces from what once used to be the domain of shell, but almost none of them attacked its very core use case, the place where it acts as a dumb glue between stand-alone applications.
But when writing shell scripts, I observe that I do have a type system in mind. When I type "ls" I know that an argument of type "path" should follow. Sometimes I am even explicit about it. When I save JSON into a file, I name it "foo.json". But none of that is formalized in the language.
And in some way, albeit in a very hacky one, shell is to some extent aware of the types. When I type "ls" and press Tab twice a list of files appears on the screen. When I type "git checkout" pressing Tab twice results in a list of git branches. So, in a way, shell "knows" what kind of argument is expected.
And the question that's bugging me is whether the same can be done in a mo
|
4818b4af-6fb4-4198-9495-0f8bd2db8bbd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Offer of collaboration and/or mentorship
UPDATE: Offer of mentorship is closed, since I received sufficiently many candidates for now. Offer of collaboration remains open for experienced researchers (i.e. researchers that (i) have some track record of original math / theoretical compsci research, and (ii) are able to take on concrete open problems without much guidance).
----------------------------------------
I have two motivations for making this offer. First, there have been discussions regarding the lack of mentorship in the AI alignment community, and that beginners find it difficult to enter the field since the experienced researchers are too busy working on their research to provide guidance. Second, I have my own research programme which has a significant number of shovel ready open problems and only one person working on it (me). The way I see it, my research programme is a very promising approach that attacks the very core of the AI alignment problem.
Therefore, I am looking for people who would like to either receive mentorship in AI alignment relevant topics from me, or collaborate with me on my research programme, or both.
Mentorship
I am planning to allocate about 4 hours / week to mentorship, which can be done over Skype, Discord, email or any other means of remote communication. For people who happen to be located in Israel, we can do in person sessions. The mathematical topics in which I feel qualified to provide guidance include: linear algebra, calculus, functional analysis, probability theory, game theory, computability theory, computational complexity theory, statistical/computational learning theory. I am also more or less familiar with the state of the art in the various approaches other people pursue to AI alignment.
Naturally, people who are interested in working on my own research programme are those who would benefit the most from my guidance. People who want to work on empirical ML approaches (which seem to be dominant in OpenAI, DeepMind and CHAI) would benefit somewhat
|
fcb69c90-3005-4f43-a871-b2c9e2084ecc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Impact, agency, and taste
I’ve been thinking recently about what sets apart the people who’ve done the best work at Anthropic.
You might think that the main thing that makes people really effective at research or engineering is technical ability, and among the general population that’s true. Among people hired at Anthropic, though, we’ve restricted the range by screening for extremely high-percentile technical ability, so the remaining differences, while they still matter, aren’t quite as critical. Instead, people’s biggest bottleneck eventually becomes their ability to get leverage—i.e., to find and execute work that has a big impact-per-hour multiplier.
For example, here are some types of work at Anthropic that tend to have high impact-per-hour, or a high impact-per-hour ceiling when done well (of course this list is extremely non-exhaustive!):
* Improving tooling, documentation, or dev loops. A tiny amount of time fixing a papercut in the right way can save hundreds of users hours of debugging or schlepping through inefficient workflows.
* Identifying promising research directions. Things like character and computer use started off as a fraction of one person’s time, taking a bet that some technique applied to some problem was going to work really well, and ended up being a major influence on Anthropic’s research direction.
* System design. This tends to be a small part of the overall time to execute a project, but when done well, it can make the final system way better and save lots of work.
* Collecting and digging into data. It’s almost always the case that, if we start looking at some new source of data that we didn’t previously have, we discover huge problems opportunities for improvement, and it changes our prioritization a lot.
* Interestingly, even versions of this work that seem very rote are often extremely high leverage. For example, you might think that a task like “look at 100-1000 individual data points for anomalies” could be delegated to a junior hire, but in fac
|
348f699b-e503-4951-8c79-2b2475dad6fc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Epistemic Circularity
Note: I wrote a better article about this; you should read that instead of this stub: The Problem of the Criterion.
I've previously argued for the existence of what I've called a "free variable" in epistemology that forces a choice between ways of knowing because no one way of knowing (system of epistemology or simply an epistemology) can be both complete and consistent. In the process of working on a current project and not wanting to have to rederive anything someone else has already argued for in academic literature, I discovered this feature already has a name and has been written about: epistemic circularity.
I find it sort of surprising we've not addressed this more within the LW community, although it's perhaps less surprising than might otherwise be expected given LW's positivist leanings. I don't have much to say on epistemic circularity at the moment, although I do consider it critical to my worldview and a crux of my thinking about philosophical conservatism for alignment research, but I did at least want to bring some wider attention to a concept that, to my recollection, we've ignored as a community.
|
cc652e72-b80c-4aca-9ee7-92d847588ead
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How Safe are Uploads?
I have encountered the argument that safe brain uploads are as hard as friendly AI. In particular, this is offered as justification for focusing on the development of FAI rather than spending energy trying to make sure WBE (or an alternative based on stronger understanding of the brain) comes first. I don't yet understand/believe these arguments.
I have not seen a careful discussion of these issues anywhere, although I suspect plenty have occurred. My question is: why would I support the SIAI instead of directing my money towards the technology needed to better understand and emulate the human brain?
Suppose human society has some hope of designing FAI. Then I strongly suspect that a community of uploads has at least as good a chance of designing FAI. If I can find humans who are properly motivated, then I can produce uploads who are also motivated to work on the design of FAI. Moreover, if emulated brains eventually outproduce us significantly, then they have a higher chance of designing an FAI before something else kills them. The main remaining question is how safe an upload would be, and how well an upload-initiated singularity is likely to proceed.
There are three factors suggesting the safety of an upload-initiated singularity. First, uploads always run as fast as the available computing substrate. It is less likely for an upload to accidentally stumble upon (rather than design) AI, because computers never get subjectively faster. Second, there is hope of controlling the nature of uploads; if rational, intelligent uploads can be responsible for most upload output, then we should expect the probability of a friendly singularity to be correspondingly higher.
The main factor contributing to the risk of an upload-initiated singularity is that uploads already have access to uploads. It is possible that uploads will self-modify unsafely, and that this may be (even relatively) easier than for existing humans to develop AI. Is this the crux of the argument aga
|
b9d850ca-08a6-40a5-86d4-53436a8d55dc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
You can't beat a troll by predicting it.
My trollability post showed that any computable probability distribution on logic which updates on proofs in a Bayesian way can in principle be manipulated into nearly-arbitrary beliefs by an adversary. (At least, this is true in my GLS-based treatment, and in a version which updates on theorems directly rather than on their provability.) Here, I sketch a result which suggests that it's fairly easy to do so, blocking some potential routes around the problem. I will use notation and concepts from the original trollability post.
First, note that corollary 1 does not depend on the odds ratio of 1/2 which Theorem 1 implies we can obtain. A weaker troll might not have the computational power to search for proofs with so much leverage over our beliefs. All the troll needs is a sequence of theorems t_n such that the product of the likelihood ratios P(□t_n|◊x)/P(□t_n|¬◊x) goes to zero. (Note: what Theorem 1 establishes is exactly that we can always find t so that this likelihood ratio is less than or equal to 1/2.) The likelihood ratio doesn't even need to be bounded away from one. It could be that each new t reduces the probability less and less (meaning the likelihood ratio is closer and closer to one), but the overall effect is still to modify the probability of x arbitrarily. This is guaranteed if the product of the ratios converges to zero.
(Analogously, we would talk about this product converging to infinity for corollary 2. I'll deal just with the case of driving probabilities down, here; as in the previous post, the idea is that if we can't keep probabilities from being driven arbitrarily down, we also cannot keep them from being driven arbitrarily up, so we get non-convergence.)
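To make the "product of likelihood ratios" point concrete, here is a minimal numerical sketch (my own illustration, not from the post): give the n-th update a likelihood ratio of exp(-1/n), which tends to one, yet the running product is exp of minus the harmonic series, which still goes to zero, so the probability of x is driven arbitrarily low.

```python
import math

# Each individual likelihood ratio approaches 1, but their product still drives
# the posterior probability of x toward 0, because
#   prod_n exp(-1/n) = exp(-sum_n 1/n) -> 0  (the harmonic series diverges).
odds = 1.0  # prior odds of x, i.e. P(x) = 0.5
for n in range(1, 100_001):
    ratio = math.exp(-1.0 / n)   # likelihood ratio contributed by the n-th theorem
    odds *= ratio
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n={n:>6}  ratio={ratio:.6f}  P(x)={odds / (1 + odds):.6f}")
```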
It seems natural to hope, then, that an agent could avoid manipulation by predictable trolls. If the agent is predicting the troll quite accurately, then it must be placing high probability on □t ahead of time; so, the likelihood ratio must be close to one. If the predictions conve
|
1d0139ef-1f92-4e1c-83ad-e29ff79601d5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Illusion of Simplicity: Monetary Policy as a Problem of Complexity and Alignment
“There is nothing scarier than ignorance put into action” — Goethe
A common mistake in economic analysis is to consider that any type of “disequilibrium” in an economy can be solved through a simple government intervention. However, this reasoning ignores that government actions can also be flawed. Government failures generally arise as a result of epistemic or organizational limitations within the State.
With regard to monetary policy, one justification that can be given for its implementation is that the monetary authority could conduct macroeconomic policies to stabilize fluctuations in economic indicators; such as GDP, inflation and exchange rate. However, the authority faces a constraint on the ability to conduct this policy optimally. To understand this restriction, it is necessary to understand the relationship between economic policy and output (GDP). It is possible to say that the effects of an economic policy on the output of an economy can be expressed as:
Y_{t+1} = Y_t + β
Where Y_{t+1} is output in the presence of the policy, Y_t is output in the period prior to the policy's implementation, and β is the measure of the policy's effect on combined output at time t. Since we seek to know whether or not a given policy can stabilize an economy, the most appropriate measure will be the standard deviation of output around its mean (Friedman 1953). The greater the deviation for a given term, the greater its instability. Since the monetary authority seeks a policy whose effects stabilize the income of an economy at an optimal level, it is acceptable to say that it seeks a scenario where the deviation of Y_{t+1} is smaller than the deviation of Y_t, given the measure β of the policy's effects. Formalizing, we find that:
σ²_{Y_{t+1}} = σ²_{Y_t} + σ²_β + 2ω_{Yβ} σ_{Y_t} σ_β
Where ω will indicate the correlation coefficient betwee
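As a quick numerical sanity check of the identity above (a minimal sketch, not the author's code; the covariance numbers are arbitrary, and ω is taken to be the correlation between Y_t and β):

```python
import numpy as np

# Check: Var(Y_{t+1}) = Var(Y_t) + Var(β) + 2·ω·σ_{Y_t}·σ_β, where Y_{t+1} = Y_t + β
# and ω = corr(Y_t, β).  The covariance matrix below is an arbitrary example.
rng = np.random.default_rng(0)
cov = np.array([[4.0, 1.5],
                [1.5, 1.0]])
Y_t, beta = rng.multivariate_normal([100.0, 2.0], cov, size=1_000_000).T

lhs = np.var(Y_t + beta)
omega = np.corrcoef(Y_t, beta)[0, 1]
rhs = np.var(Y_t) + np.var(beta) + 2 * omega * np.std(Y_t) * np.std(beta)
print(lhs, rhs)  # both close to 8.0, agreeing up to sampling noise
```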
|
f16d22ea-006c-4016-b01e-dfc9a32a93ee
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Asches to Asches
[Content note: fictional story contains gaslighting-type elements. May induce Cartesian skepticism]
You wake up in one of those pod things like in The Matrix. There’s a woman standing in front of you, wearing a lab coat, holding a clipboard.
“Hi,” she says. “This is the real world. You used to live here. We erased your memories and stuck you in a simulated world for a while, like in The Matrix. It was part of a great experiment.”
“What?” you shout. “My whole life, a lie? How dare you deceive me as part of some grand ‘experiment’ I never consented to?”
“Oh,” said the woman, “actually, you did consent, in exchange for extra credit in your undergraduate psychology course.” She hands you the clipboard. There is a consent form with your name on it, in your handwriting.
You give her a sheepish look. “What was the experiment?”
“You know families?” asks the woman.
“Of course,” you say.
“Yeah,” says the woman. “Not really a thing. Like, if you think about it, it doesn’t make any sense. Why would you care more for your genetic siblings and cousins and whoever than for your friends and people who are genuinely close to you? That’s like racism – but even worse, at least racists identify with a group of millions of people instead of a group of half a dozen. Why should parents have to raise children whom they might not even like, who might have been a total accident? Why should people, motivated by guilt, make herculean efforts to “keep in touch” with some nephew or cousin whom they clearly would be perfectly happy to ignore entirely?”
“Uh,” you say, “not really in the mood for philosophy. Families have been around forever and they aren’t going anywhere, who cares?”
“Actually,” says the woman, “in the real world, no one believes in family. There’s no such thing. Children are taken at birth from their parents and given to people who contract to raise them in exchange for a fixed percent of their future earnings.”
“That’s monstrous!” you say. “When did this happen? Weren
|
1a994a5a-28ec-4f21-b9db-89398167a1c5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Saint Petersburg, Russia
Discussion article for the meetup : Saint Petersburg, Russia
WHEN: 27 October 2013 04:00:00PM (+0400)
WHERE: Saint Petersburg, Tekhnologichesky Institut metro station, 1-ya Krasnoarmeyskaya street, building 15
Come to the (probably) first in a long time meetup in St.Petersburg!
It will be located in cafe "PMG" - for detailed description please read mailing list announcement. At least for the first time there will be no or little moderation - just socialising, getting to know each other, unstructured discussion. We will be looking at our interests and how meetups can meet them.
Please look in the mailing list for more information: https://groups.google.com/forum/#!forum/less-wrong-saint-petersburg
Or contact me at 8 911 843 56 44, any day from 18:00 to 00:00.
Discussion article for the meetup : Saint Petersburg, Russia
|
3be2cec7-0924-4d12-97e6-75bf913ecc4a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
An Apology is a Surrender
[Epistemic Status: This post is aimed at people who've hurt loved ones by trying to fast talk their way out of apologies; remember the law of equal and opposite advice. A different version of this post, with examples drawn from responses to #MeToo, previously appeared on my blog.]
When news of Meltdown and Spectre broke, Intel released one of the worst non-apologies I've ever seen.
They lead with "Intel believes these exploits do not have the potential to corrupt, modify or delete data", which is only a consolation if you don't care about any of your passwords being stolen by hackers. They follow that one up with 'Recent reports that these exploits are caused by a “bug” or a “flaw” and are unique to Intel products are incorrect', which is baffling, until you realize that they're actually using conditional logic in a press release.
They end the train wreck with "Intel believes its products are the most secure in the world", which makes me worried that Intel management has lost touch with reality.
I might be old fashioned, but when a hardware manufacturer ships a security flaw that's this big, with a fix that might severely impact performance, I expect an apology, not a rearguard.
What I mean is: as long as you're defending yourself, you aren't internalizing the consequences of your actions. For as long as you keep fighting, you get to keep believing that maybe consequences won't materialize. Maybe you'll say the right thing; maybe the consequences will disappear.
An apology accepts consequences.
Imagine yourself arguing with someone you've hurt. Imagine the wiggle words and excuses you might use. Imagine how hard it is to be quiet, to listen without arguing or defending yourself when someone tells you that you've hurt them.
Doing that, despite the voice inside you telling you to fight, telling you to try and get away clean, that's scary; that's difficult. That's a surrender.
Of course, surrendering is just the first step. It's best if you back it up with som
|
87b1357c-637b-4b89-9d18-59105c7ff293
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Weekly LW Meetups: Brussels, Chicago, Dorset, Fort Lauderdale, London, Melbourne, Pittsburgh, Shanghai, Sydney, Twin Cities
There are upcoming irregularly scheduled Less Wrong meetups in:
* Brussels meetup: 14 April 2012 11:00AM
* First Dorset UK Meetup: 14 April 2012 03:00PM
* Fort Lauderdale: 14 April 2012 06:00PM
* Shanghai Less Wrong Meetup: 15 April 2012 10:36PM
* Twin Cities, MN (for real this time): 15 April 2012 01:00PM
* Sydney meetup - Biased pandemic and other games: 17 April 2012 07:30PM
* Pittsburgh - Presentation on Anthropics: 20 April 2012 06:00PM
* Rome LessWrong Meetup: 21 April 2012 07:00PM
* Longmont Sparkfun Soldering Competition Field Trip: 28 April 2012 11:00AM
* Graz Meetup: 28 April 2012 11:21PM
The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Weekly Chicago Meetups: 14 April 2012 01:00PM
* London: 15 April 2012 02:00PM
* Melbourne social meetup: 20 April 2012 07:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Chicago, London, Madison WI, Melbourne, Mountain View, New York, Ohio, Ottawa, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up.
Please note that for your meetup to appear in the weekly meetups feature, you need to po
|
3acf0c8b-dd98-41d2-8dc0-651348b8826e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Comparing Sparse Autoencoder Features from Individual and Combined Datasets
Abstract
With the growing prevalence of large language models (LLMs), explainable AI (XAI) is a concern for all. A growing field of XAI is mechanistic interpretability. One method of mechanistic interpretability is the use of sparse autoencoders (SAEs) to extract the features of LLMs. The effect of text datasets used to train SAEs on the features is a topic of ongoing study. Here we compare SAE features from two text datasets trained separately and then trained combined, in two LLMs. The text datasets come from lower and higher represented group topics. We find that the text influences the features that emerge by tending to split features, rather than aggregating features. In addition, the LLMs that are used to label features also have an influence on the feature labels, where the higher represented group is underrepresented in the features.
Introduction
LLMs have become ubiquitous in modern life, creating the need for more advanced XAI.[1][2] Mechanistic interpretability aims to reverse engineer complex neural networks.[3][4] An issue with LLMs is that their neurons are not as interpretable as other neural networks such as CNNs--the neurons of LLMs are more polysemantic.[5][6] However, these polysemantic neurons can still achieve a variety of tasks, since inputs tend to be sparse, so only a limited number of "features" of language are activated at a time in the models. Training SAEs on LLM activations attempts to create monosemantic features of these LLMs.[7][8]
SAEs as XAI for LLMs tend to be trained on activations from layers of the LLMs. The SAEs reconstruct the activations during training. Normally, autoencoders reduce the input to fewer latent features; however, with SAEs, an expansion factor (R) is multiplied by the input dimensions (d_in) to create the number of hidden dimensions (d_hid) in the SAE. The activations of the hidden dimensions of the SAEs are the attempted monosemantic feature values. These features can then be manipulated, such as by clamping
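A minimal sketch of the setup just described (my own illustration, not the authors' code; the ReLU nonlinearity and the L1 sparsity coefficient are common SAE choices assumed here rather than details stated in the post):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE: d_hid = R * d_in, trained to reconstruct LLM activations."""
    def __init__(self, d_in: int, R: int = 8):
        super().__init__()
        self.encoder = nn.Linear(d_in, R * d_in)
        self.decoder = nn.Linear(R * d_in, d_in)

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))  # candidate monosemantic features
        recon = self.decoder(features)             # reconstructed LLM activations
        return recon, features

def sae_loss(recon, acts, features, l1_coef=1e-3):
    # reconstruction error plus an L1 penalty that encourages sparse features
    return ((recon - acts) ** 2).mean() + l1_coef * features.abs().mean()
```

The `features` tensor is what would then be inspected or clamped, along the lines of the manipulation mentioned above.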
|
56713faa-81d5-408b-a664-47247b1a4b46
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Thoughts on Zero Points
There’s a cool concept I’ve been thinking about. I first heard of it when reading Jesse Schell’s book “The Art of Game Design”. (Fun fact: Jesse Schell was my professor’s professor, aka my grand-professor.)
Then I heard of it again in the LessWrong post “Choosing the Zero Point”. Having been exposed to it twice, I now see it everywhere. I’m not sure how to describe it though, so I’ll just throw a bunch of examples at you:
1. Jesse Schell: the developers of World of Warcraft wanted players to play just a little each day. In the game, you gain “experience points” as you play. So they decided you would get less experience points once you had already been playing for half an hour. Of course, players hated this. But instead of actually changing the system, the developers just reduced the baseline amount of experience points you got, and then said that you got a bonus during the first half hour.
1. In the words of Chris Remo from the Idle Thumbs podcast, “It’s EXACTLY the same as it was before, except NOW everyone is like ‘Fuck yeah, Blizzard, this is exactly what I want!’"
2. Intensionally, it feels very different to get a bonus during the first half hour than it does to get penalized beyond the first half hour.
3. But extensionally, the two systems are the exact same. You get more experience points during the first half hour of each day than during the rest of the time you play.
4. P.S. the above is from memory and may not be 100% accurate to how it happened IRL.
2. LessWrong: some people portray eating meat as evil, and not eating meat as the bare minimum to be a decent person. But it may be more persuasive to portray eating meat as neutral, and to portray not eating meat as an insanely awesome opportunity to do a massive amount of good.
1. Intensionally, they are very different. Finding out that you’ve actually been perpetuating a great evil and must drastically change your behavior and give up something you enjoy to just get back to the baseli
|
d49bc7ec-8fc9-4009-abbb-383bdf97c5b8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Are we inside a black hole?
Epistemic status: Wildly speculative. I am not a physicist. It may quickly become apparent that I have no idea what I'm talking about. Nevertheless, I found the thought experiment interesting.
The set of models known as superstring theory posits the existence of multiple compactified dimensions in addition to the usual three spatial and one temporal dimension, which I'll call 3+1 space. Here I will suggest that an observer approaching a black hole might perceive 3+1 space collapsing into a 2+1 space. By extension, our 3+1 space may have resulted from gravitational collapse in a higher dimensional space (possibly more than once).
Consider an observer approaching a black hole. If the observer is well clear of the event horizon, the observer has three degrees of spatial freedom. We can describe its position in space using the common spherical coordinates of r, phi, and theta. r is the distance between the observer and the center of the black hole. Phi and theta are two angles equivalent to latitude and longitude on Earth.
As the observer approaches the black hole, the gravitational force Fg increases. The forces acting on the observer in the radial dimension can be summarized as Fg+X, where X is all other forces acting on the observer in the radial dimension. As the observer approaches the singularity, Fg grows without bound and X becomes increasingly irrelevant.
At this point, the observer's position in the radial dimension r is, for all practical purposes, a function of time. The observer has lost all capability to affect its motion in the r dimension. Any length the observer may have had along the r dimension will be stripped away in the process of spaghettification. The observer would no longer be able to perceive r and time as separate dimensions. The point r=0 would be, to the observer, both the center of the universe and the infinite future.
Two spatial dimensions would remain, phi and theta, and the observer would be able to freely move acr
|
12a572af-14e5-4891-a8d8-04e3288537a7
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Why do People Think Intelligence Will be "Easy"?
In many discussions I've had with people around AI takeoff, I've frequently encountered a belief that intelligence is going to be "easy".
To quantify what I mean by "easy", many people seem to expect that the *marginal* returns on cognitive investment (whether via investment of computational resources, human intellectual labour, cognitive reinvestment or general economic resources) will not be diminishing, or will diminish gracefully (i.e. at a sublinear rate).
(Specifically marginal returns around and immediately beyond the human cognitive capability frontier.)
I find this a bit baffling/absurd honestly. My default intuitions lean towards marginal returns to cognitive reinvestment diminishing at a superlinear rate, and my armchair philosophising so far (thought experiments and general thinking around the issue) seems to support this intuition.
A couple intuition pumps:
* 50%, 75%, 87.5%, 93.75%, ... are linear jumps in predictive accuracy (one bit each), but returns seem to diminish at an exponential rate (see the sketch after this list)
+ On the other hand 6.25%, 12.5%, 25%, 50% represent the same linear jumps, but this time with returns growing at an exponential rate
+ This suggests that the nature of returns to cognitive investment might exhibit differing behaviour depending on where in the cognitive capabilities curve you are
- Though I've not yet thought about how this behaviour generalises to other aspects of cognition separate from predictive accuracy
* A priori, I'd expect that, given that human dominance is almost entirely dependent on our (collective) intelligence, our evolution would have selected strongly for intelligence until it met diminishing returns to higher intelligence
+ That is, we should expect that evolution made us as smart as was beneficial given its constraints:
- Calorie requirements of larger brains
- Energy budget
- Size of the human birth canal
- Slight variations on existing cognitive architecture
- Etc.
+ [I think the reported higher incidence of some neuronal issues/disorders among higher IQ human subpopulations (e.g. Ashkenazi Jews)](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/#:~:text=Doctors%20have%20long%20noted%20that%20Ashkenazi%20Jews%20are%20uniquely%20susceptible%20to%20various%20genetic%20diseases) may be an argument in favour of evolution having met said marginal returns
+ Perhaps [the Flynn Effect](https://en.wikipedia.org/wiki/Flynn_effect) is an argument against this
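To make the bit-counting concrete, here is a minimal sketch (my own reading of the intuition pump, not the author's code), assuming "one bit" means halving the error rate in the first sequence and doubling the accuracy in the second: each step is then exactly one bit, yet the absolute accuracy gains shrink in one case and grow in the other.

```python
import math

# Under the assumption that "one bit" = halving the error rate (first sequence)
# or doubling the accuracy (second sequence).
diminishing = [0.50, 0.75, 0.875, 0.9375]   # percentage-point gains shrink
growing = [0.0625, 0.125, 0.25, 0.50]       # percentage-point gains grow

for prev, cur in zip(diminishing, diminishing[1:]):
    bits = math.log2((1 - prev) / (1 - cur))
    print(f"{prev:.4f} -> {cur:.4f}: +{(cur - prev) * 100:5.2f} points, {bits:.0f} bit")

for prev, cur in zip(growing, growing[1:]):
    bits = math.log2(cur / prev)
    print(f"{prev:.4f} -> {cur:.4f}: +{(cur - prev) * 100:5.2f} points, {bits:.0f} bit")
```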
Another way to frame my question: there will obviously be diminishing marginal returns (perhaps superlinearly diminishing) at some point; why are we confident that point is far beyond the peak of human intelligence?
|
fc2fdc92-595e-4a9e-9075-f58a323b80be
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Self-administered EMDR without a therapist is very useful for a lot of things!
EMDR (Eye Movement Desensitization and Reprocessing) therapy is a structured therapy that encourages the patient to focus briefly on a traumatic memory while simultaneously experiencing bilateral stimulation (typically eye movements, but also tones or taps), which is associated with a reduction in the vividness and emotion associated with the traumatic memories.
EMDR is usually done with a therapist. However, you can also just do self-administered EMDR on your own - as often and whenever you want without any costs! Most people don't seem to know this great "do it on your own" option exists - I didn't. So my main goal with this post is to just make you aware of the fact that: "Hey, there's this great therapeutic tool called EMDR, and you can just do it!". And bearing some important caveats in mind, I highly recommend it. I've been doing emotional work like extended meditation retreats, Internal Family Systems (IFS), Ideal Parent Figures (IPF), Focusing, etc., for a long time, but self-administered EMDR has actually been one of the most helpful techniques of them all to me!
Also, I've found EMDR helpful for a much broader set of problems than the official EMDR protocol implies. You can use it for anything "trauma" in the broadest sense of the word - any unhelpful emotional schema; any strongly negatively charged emotion, belief, or memory that is kind of stuck in unconsciousness.
EMDR also doesn't have to be this complicated thing at all. I don't think you need to know more than there is in this blog post. You can totally 80/20 this, i.e. get 80% of the benefits from just 20% of the knowledge/effort.
How to do self-administered EMDR
Read here how to do EMDR with a therapist: https://www.helpguide.org/articles/therapy-medication/emdr-therapy.htm
For self-administered EMDR, you simply do the same, just without the therapist! So to summarize the most important steps of the official protocol, you:
* Identify:
* A specific scene or picture that best represents
|
6b776352-e633-47ce-9b97-f4842d38b816
|
trentmkelly/LessWrong-43k
|
LessWrong
|
David Chalmers' "The Singularity: A Philosophical Analysis"
David Chalmers is a leading philosopher of mind, and the first to publish a major philosophy journal article on the singularity:
Chalmers, D. (2010). "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies 17:7-65.
Chalmers' article is a "survey" article in that it doesn't cover any arguments in depth, but quickly surveys a large number of positions and arguments in order to give the reader a "lay of the land." (Compare to Philosophy Compass, an entire journal of philosophy survey articles.) Because of this, Chalmers' paper is a remarkably broad and clear introduction to the singularity.
Singularitarian authors will also be pleased that they can now cite a peer-reviewed article by a leading philosopher of mind who takes the singularity seriously.
Below is a CliffsNotes of the paper for those who don't have time to read all 58 pages of it.
THE SINGULARITY: IS IT LIKELY?
Chalmers focuses on the "intelligence explosion" kind of singularity, and his first project is to formalize and defend I.J. Good's 1965 argument. Defining AI as being "of human level intelligence," AI+ as AI "of greater than human level" and AI++ as "AI of far greater than human level" (superintelligence), Chalmers updates Good's argument to the following (its bare logical skeleton is sketched just after the list):
1. There will be AI (before long, absent defeaters).
2. If there is AI, there will be AI+ (soon after, absent defeaters).
3. If there is AI+, there will be AI++ (soon after, absent defeaters).
4. Therefore, there will be AI++ (before too long, absent defeaters).
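For readers who like to see the inference spelled out, here is a minimal formal sketch (my own illustration, not from Chalmers or the post) of the argument's propositional skeleton, ignoring the temporal and "absent defeaters" qualifiers:

```lean
-- Reading "there will be AI / AI+ / AI++" as atomic propositions,
-- the conclusion follows by two applications of modus ponens.
example (AI AIplus AIplusplus : Prop)
    (h1 : AI) (h2 : AI → AIplus) (h3 : AIplus → AIplusplus) : AIplusplus :=
  h3 (h2 h1)
```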
By "defeaters," Chalmers means global catastrophes like nuclear war or a major asteroid impact. One way to satisfy premise (1) is to achieve AI through brain emulation (Sandberg & Bostrom, 2008). Against this suggestion, Lucas (1961), Dreyfus (1972), and Penrose (1994) argue that human cognition is not the sort of thing that could be emulated. Chalmers (1995; 1996, chapter 9) has responded to these criticisms at length. Briefly, Chalmers notes that even i
|
01bc5f54-85ea-4541-a5fc-0c2bbdbe2ce3
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
What is "hedonium"?
**Hedonium** (also known as **orgasmium**) is a homogeneous substance with limited consciousness, which is in a constant state of supreme bliss. An AI programmed to "maximize happiness" might simply tile the universe with hedonium. Some who believe this consider it a good thing; others do not. Those who do not, use its undesirability to argue that not all terminal values reduce to "happiness" or some simple analogue. Hedonium is the [hedonistic](https://www.lesswrong.com/tag/hedonism) [utilitarian](https://www.lesswrong.com/tag/utilitarianism)'s version of [utilitronium](https://www.lesswrong.com/tag/utilitronium).
Blog posts
----------
* [Prolegomena to a Theory of Fun](http://lesswrong.com/lw/wv/prolegomena_to_a_theory_of_fun/)
* [In Praise of Boredom](http://lesswrong.com/lw/xr/in_praise_of_boredom/)
* [Are pain and pleasure equally energy-efficient?](https://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html)
See also
--------
* [Quote from *Superintelligence*](https://www.goodreads.com/quotes/1413237-consider-an-ai-that-has-hedonism-as-its-final-goal)
* [Fun theory](https://www.lesswrong.com/tag/fun-theory)
* [Complexity of value](https://www.lesswrong.com/tag/complexity-of-value)
* [Utilitronium](https://www.lesswrong.com/tag/utilitronium)
|
8a5e4bce-6b4c-4a90-9efc-a32becd10fc9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Where I currently disagree with Ryan Greenblatt’s version of the ELK approach
Context: This post is my attempt to make sense of Ryan Greenblatt's research agenda, as of April 2022. I understand Ryan to be heavily inspired by Paul Christiano, and Paul left some comments on early versions of these notes.
Two separate things I was hoping to do, that I would have liked to factor into two separate writings, were (1) translating the parts of the agenda that I understand into a format that is comprehensible to me, and (2) distilling out conditional statements we might all agree on (some of us by rejecting the assumptions, others by accepting the conclusions). However, I never got around to that, and this has languished in my drafts folder too long, so I'm lowering my standards and putting it out there.
The process that generated this document is that Ryan and I bickered for a while, then I wrote up what I understood and shared it with Ryan, and we repeated this process a few times. I've omitted various intermediate drafts, on the grounds that sharing a bunch of intermediate positions that nobody endorses is confusing (moreso than seeing more of the process is enlightening), and on the grounds that if I try to do something better then what happens instead is that the post languishes in the drafts folder for half a year.
(Thanks to Ryan, Paul, and a variety of others for the conversations.)
Nate's model towards the end of the conversation
Ryan’s plan, as Nate currently understands it:
* Assume AGI is going to be paradigmatic, in the sense of being found by something roughly like gradient descent tuning the parameters in some fixed architecture. (This is not intended to be an argument for paradigmaticity; attempting to align things in the current paradigm is a good general approach regardless (or so Nate understands Ryan to claim).)
* Assume further that Earth's first AGIs will be trained according to a process of our choosing. (In particular, it needs to be the case that AGI developers can train for more-or-less any objective they want, wit
|
1f2f3d94-ac00-4dbb-980e-e366c5836e63
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Conjunction Fallacy
The following experiment has been slightly modified for ease of blogging. You are given the following written description, which is assumed true:
> Bill is 34 years old. He is intelligent, but unimaginative, compulsive, and generally lifeless. In school, he was strong in mathematics but weak in social studies and humanities.
No complaints about the description, please, this experiment was done in 1974. Anyway, we are interested in the probability of the following propositions, which may or may not be true, and are not mutually exclusive or exhaustive:
> A: Bill is an accountant.
> B: Bill is a physician who plays poker for a hobby.
> C: Bill plays jazz for a hobby.
> D: Bill is an architect.
> E: Bill is an accountant who plays jazz for a hobby.
> F: Bill climbs mountains for a hobby.
Take a moment before continuing to rank these six propositions by probability, starting with the most probable propositions and ending with the least probable propositions. Again, the starting description of Bill is assumed true, but the six propositions may be true or untrue (they are not additional evidence) and they are not assumed mutually exclusive or exhaustive.
In a very similar experiment conducted by Tversky and Kahneman (1982), 92% of 94 undergraduates at the University of British Columbia gave an ordering with A > E > C. That is, the vast majority of subjects indicated that Bill was more likely to be an accountant than an accountant who played jazz, and more likely to be an accountant who played jazz than a jazz player. The ranking E > C was also displayed by 83% of 32 grad students in the decision science program of Stanford Business School, all of whom had taken advanced courses in probability and statistics.
There is a certain logical problem with saying that Bill is more likely to be an accountant who plays jazz than he is to play jazz. The conjunction rule of probability theory states that, for all X and Y, P(X&Y) <= P(Y). That is, the probability tha
|
2dce403d-6bf8-498c-8209-e84ef6a024ea
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Small Talk is Good, Actually
You all liked the last two, so I'm going to keep it going so long as I have ideas and time. This is another casual post in the spirit of what I wish someone had told me when I was younger.
Some people really hate small talk. I never really did. Mostly small talk just confused me or I barely registered the difference between small talk and big talk. So I never really cared about it one way or another.
Then I moved to the Bay Area and lived with a bunch of rationalists and found out some people really hate small talk.
I remember one time carpooling to a party. The party was in Palo Alto and we were driving down from Berkeley. As we started to cross the Dumbarton bridge we got on the topic of salt flats[1]. After a couple minutes one of my fellow passengers cried out:
> enough of this small talk! i could have this conversation with anyone! we're all thoughtful, interesting people! surely we can talk about something more interesting!
And then sat in silence for a couple minutes trying to come up with a good enough topic of conversation.
The standard arguments against small talk go like this:
* Small talk is too shallow. Only deeper conversations are worth having.
* Small talk is boring. It doesn't convey new information.
* Small talk is fake and full of bullshitting and hyperbole as people try to make themselves look good.
* Small talk asks people to casually share too much about their personal lives.
* Small talk demands people be constantly ready to say mildly interesting things.
People value different things, and I'm sure some people legit don't value small talk and there's no convincing them. But our values are also informed by our skills. Sometimes what we think is a deeply held value is actually just a defense mechanism because we aren't good at a skill. So in this case, opposing small talk is sometimes the result of:
* being introverted and overloaded and not wanting to talk at all,
* being socially nervous or shy,
* having had bad experiences makin
|
5d455f33-c0b7-45f6-b704-b7f5d480e443
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Illiteracy in Silicon Valley
Sam Altman xeeted on Friday, "i think this is gonna be more like the renaissance than the industrial revolution."
The prediction, of course, follows the common misconception of history in which the renaissance was a glorious rebirth of classical high civilization. The truth, really, is that Humanism was born out of Europe's greatest mass mortality, in which half to two thirds of the population of the cities died off in repeated waves of bubonic plague. It was an absolutely terrible moment, and I think most people would greatly prefer to live even in the early industrialized era.
I'm not going to twist his words any further along this "well achsyhually," line, and I fully understand that he is vaguely trying to say that AI will liberate people into some fantasy renn-faire rather than put them into an Upton Sinclair industrial nightmare. But my true purpose is to treat Altman's ignorance as a symptom of a truly despicable trend in Silicon Valley and among programmers in general. Dunning-Kruger discourse is very overplayed and something I almost never invoke, but in the case of software engineers crossing into the humanities, it's far too applicable.
It is so tempting to take one's limited knowledge of History, apply some logical system, and make conclusions about trends of the past or the future. It's considered total common sense that humanity is on an upwards climb towards greater knowledge and understanding. However, the actual story is much more complicated, and the common narrative of progress, what's called Whiggish history, is not what happened to our humanity. In fact, at the beginning of the modern era, the 1500s, the changes in play provoked persecutions, religious wars, witchcraft crazes, heresy inquisitions, and the continuous growth of state power in a process which reached a crescendo in the legendary wars and mass murders of the 20th century. Indeed, if I presumed Altman had a college-level understanding of European history, I would take his xe
|
c939e2ee-4b53-4a01-b3c8-d5fc26463b4b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Gentle Introduction on How to Use Anki to Improve Your Memory
This article was initially published in Superpowered Self, but after reading, and very much enjoying, TurnTrout's Lessons I've Learned from Self-Teaching I figured people would find this introduction I've written to Anki for complete beginners valuable.
Where would you be in life if you did not forget?
You would have done better in school, for starters. Instead of turning in your bed unable to sleep terrified of the exam coming the next day, you would soundly sleep with the knowledge that you know everything you need to know to ace the exam. And ace it you would indeed.
You would have spent fewer hours studying. How many times have you opened the textbook only to find that you’ve forgotten all that you’d studied the day before? If you did not forget things then those hours you spent studying would always amount to something, instead of leaving you feeling like you’re swimming against the current. School might even have been fun if you did not forget.
What would your career look like if you did not forget?
Forgetfulness affects us all. There is no one that has not grappled with this problem before. Our lives would be better if we did not forget.
Unfortunately, forgetting is inescapable. There is no such thing as a perfect memory. I am not here to sell you on a magic pill that will turn you into Bradley Cooper in the movie Limitless.
However, that doesn’t mean there aren’t things you can do to massively reduce the speed at which you forget things, because there are.
Science has known about what it takes to get memories to stick around in your memory for a long time. It has known about it for a while now, in fact. It’s just that it has done a terrible job so far at making sure that you know about it, you whose life would massively benefit from that knowledge.
My intent for writing this is to correct these wrongs and introduce you to spaced repetition, the more than established method that will put you in control of your memory once and for all, and Anki, the s
|
717beefb-044b-49ce-afb1-2487d92c2c98
|
trentmkelly/LessWrong-43k
|
LessWrong
|
RAISE AI Safety prerequisites map entirely in one post
After reflecting on recent criticism, all RAISE projects have been discontinued, and a postmortem is in our plans. (Update: the postmortem is here) One of those projects was the AI Safety prerequisites online course originally announced here. We're sharing the curriculum here in almost plaintext format so that people can easily find and access it. There is also a GDoc here. (Note: two other products of RAISE are series of lessons on IRL and IDA. They don't have such neatly formed curriculums, and they are still accessible as online lessons.)
It was supposed to be a guide for helping people who want to get into AI safety research. It contains only foundations of math topics (Logic and proof, ZF(C) Set theory, Computability theory to be precise), which are more useful for agent foundations stuff and not useful for machine learning stuff. It was planned to be extended to cover more topics, but that never happened.
How to use this
The main path contains 20 levels. It is the recommended path through the curriculum. Its levels are actually short sequences of levels from the three other paths.
To see what textbooks are required to study a path, see its beginning. Computability theory and set theory paths require two paid textbooks.
13 out of 20 levels of the main path are covered by our online course, which is free (but still requires paid textbooks). To use the online course, register here. You might prefer to use it instead of the text below because it provides more sense of progress, contains solutions to the exercises, has some writing mistakes fixed, maybe feels like a less tedious thing, and provides some additional exercises which we don't think are important.
Credits
The curriculum was made by Erik Istre and Trent Fowler. People who created the online course are: Philip Blagoveschensky, Davide Zagami, Toon Alfrink, Hoagy Cunningham, Danilo Naiff, Lewis Hammond. Also these people contributed: Jacob Spence, Roland Pihlakas. Also, thanks to Grasple for providin
|
345a5ef4-2909-46a0-9e98-ec6dec686488
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Using LLM's for AI Foundation research and the Simple Solution assumption
Current LLM based AI systems are getting pretty good at maths by writing formal proofs in Lean or similar. https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/
So, how can we use these models to help align AI?
The Simple Alignment Solution Assumption states that many problems in alignment, for example corrigibility or value learning, have simple solutions. I mean "simple" in the sense that logical induction is a simple solution. That expected utility maximization is. That causal models are. This is the sort of thing we are searching for.
Under the Simple Alignment Solution Assumption, the solution is:
* Unique or unique-ish. Pinned down by theorems the way that probability theory and expected utility maximization are.
* Fundamentally simple, in the right ontology. If you are writing a python program, logical induction is a bit of a pain. But if you can say "all polynomial time Turing machines" with ease, then logical induction is simple.
* Recognizable with a few expert-days. These ideas are fairly know-it-when-you-see-it, a lot easier to recognize than to produce.
* Mathematical structures. Formal proofs, probability theory and logical induction each have their own mathematical structure.
> Which represents a logical contradiction, and for a while there were attempts to develop "non-monotonic logics" so that you could retract conclusions given additional data. This didn't work very well, since the underlying structure of reasoning was a terrible fit for the structure of classical logic, even when mutated.
https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models
So, how can LLM's come up with new things in this class?
Training via self play?
Let's say a theorem is Useful if it is often used in the process of proving/disproving random ZFC statements. Such a criterion can be measured by generating random statements to prove/disprove, putting the theorem to be measured in the assumptions, a
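A rough sketch of one plausible reading of that criterion (my own illustration; `sample_random_statement` and `try_to_settle` below are trivial stand-ins for a statement generator and a budgeted prover call, not real APIs):

```python
import random
from dataclasses import dataclass

@dataclass
class Result:
    solved: bool

def sample_random_statement():
    # placeholder for sampling a random ZFC statement
    return f"s{random.randrange(10_000)}"

def try_to_settle(statement, assumptions, budget):
    # placeholder for a call to an automated prover with a time budget;
    # here it just pretends extra assumptions sometimes help
    return Result(solved=random.random() < (0.1 + 0.2 * len(assumptions)))

def usefulness(theorem, n_trials=1000, budget_seconds=10):
    # fraction of random statements that get settled with the candidate
    # theorem in the assumptions but not without it
    helped = 0
    for _ in range(n_trials):
        s = sample_random_statement()
        without = try_to_settle(s, assumptions=[], budget=budget_seconds)
        with_thm = try_to_settle(s, assumptions=[theorem], budget=budget_seconds)
        if with_thm.solved and not without.solved:
            helped += 1
    return helped / n_trials
```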
|
eef01f3a-9f72-4bd4-a22d-be55d8c8ead8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Taxonomy of Jobs Deeply Resistant to TAI Automation
This is a light, informal taxonomy of jobs that conceptually will display significant resistance to automation, even after the diffusion of transformative AI systems that can outperform humans in all forms of cognitive and physical labor.
This is intended to provide practical, non-technical context around plausible human jobs in a post-AGI economy. In particular, we would like this article to challenge two prevailing narratives from opposite ends of the aisle:
Some analyses have suggested that the development of TAI could mean that “human labor share could eventually fall to zero”, or that TAI indicates that basically no cognitive labor jobs would remain for humans. We believe this is broadly inaccurate, and provide specific examples of competitive advantages that enable human labor even after TAI systems become fully diffused.
On the other hand, many economists and much of the political ecosystem is operating under the assumption that AI will be primarily augmentative, not automative, and that we do not need to be overly concerned about massive labor displacement. We also believe this is a deeply limited worldview and fails to consider the possibility of TAI existing and having broadly automative capabilities. Assuming that TAI will exist and can automate most cognitive and manual labor, we will illustrate how we expect even the most “TAI-resistant” jobs described to be massively impacted.
Summary Card
Context & Assumptions
This article will illustrate a potential long-term future for human jobs after transformative AI has successfully diffused deeply into an economy.
This article will contain several assumptions:
* At some point in the future, TAI will have the technical ability to outperform humans in all forms of cognitive labor (via LLM-style cognitive AI systems). It will also have the ability to match or surpass human physical capabilities in nearly all forms of physical labor, via a combination of highly dexterous robots and generalizable AI s
|
cbcaa924-98d5-433d-8fcc-630b9deba386
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Understanding-1-2-3
Epistemic Status: Seems worth sharing
Assumes Knowledge Of: System 1 and System 2 from Thinking, Fast and Slow
While in San Francisco this past week, I found myself in many complex and rapid conversations, with new ideas and concepts flying by the second. I hadn’t experienced this since last February’s AI Safety Unconference. It is wonderful to be pushed to one’s limits, but it means one must often rely on System-1 rather than System-2, or one will be left behind and lost. I now realize this is what happened to me in foreign language classes – I was using System-2 to listen and talk, and that doesn’t work at all if others don’t wait for you.
This meant that often I’d grasp what was being said in an intuitive sense using System-1, or agree or disagree in that fashion, without the time to unpack it. This can lead to great understanding, but also lead to waking up the next day having no idea what happened, or thinking about it in detail and realizing you didn’t understand it after all. So when people asked if I understood things, I instinctively started saying things like “I understand-1 but not 2.” I also did this for beliefs and agreements.
I debated explaining it but decided not to, as a test to see if others would intuit the meaning; most seemed to (or ignored the numbers, or came up with something equivalent in context). One person explicitly asked, and said he’d use it. I think this may be a valuable clarification tool, as it’s very different to understand-1 versus understand-2, or agree-1 versus agree-2, and seems better than saying something like “I kind of agree” or “I think I agree but I’m not sure,” which are both longer and less precise ways people often say the same thing.
My system-1 also added understand-3. Understand-3 means “I understand this well enough to teach it.” By extension, believe-3 or agree-3 means “I believe this, know how to convince others, and think others should be convinced.” To truly understand something, one must be able to expl
|
7c42de8e-0847-4748-86a9-bfbddc7b01ec
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Deliberate Practice for Decision Making
Much of this material is sourced/summarized from Deliberate Performance: Accelerating Expertise in Natural Settings and “The Power of Intuition”. Both contain much more than what is written here. An earlier draft/version of this was posted as "Productive Use of Heuristics and Biases". Based on feedback in the comments, much has changed. Feedback is appreciated, as are any other ideas on how deliberate practice can be applied.
How does someone incorporate something like deliberate practice into a typical job and everyday life? Deliberate practice is meant to be challenging, and because of this, it is draining. Much of the expertise literature describes a limit of a few hours of deliberate practice per day. If you can sustain more than that, you probably aren’t doing it right.
While the amount you can put in per day is limited, it still takes many hours of this intense practice, so reaching expert performance in a domain takes anywhere from years to decades. Deliberate performance is a concept related to deliberate practice, but perhaps more effective and efficient. Rather than separating practice and performance, the idea is to overlap the two as much as possible. The primary aim is to accelerate the development of expertise, while also improving the productivity of practice. Since you are practicing using normal tasks, you don’t have to set as much time aside where you aren’t working.
Deliberate performance is readily applicable to decision making. You make decisions as you normally would but also record your expectations and thinking behind the decision. What do you think will happen and why? How do you feel about the decision? When you have feedback on how the decision turned out, you can go back to see what you were thinking and how well your expectations matched reality. The idea of recording this information has been recommended by Peter Drucker, and more recently by Daniel Kahneman (Kahneman sees it as a way to reduce hindsight bias).
The term deliberate
|
004398f2-2baf-4ae1-a9e2-33ff4ae2576a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Literature on How to Maximize Preferences
What mainstream academic literature should I look at in order to better think through questions like "In order to maximize my preferences, to what extent should I invest time in better understanding my goals?", "To the extent that question 1 indicates that this is important, how can I better understand my goals?", "In order to maximize my preferences, to what extent should I invest time into better understanding different parts of the world?", and "What is the best way to better understand different parts of the world?" I would imagine something related to epistemics or psychology would answer "to what extent should I invest time in better understanding my goals", and statistics/Tetlock's research would answer questions related to "what is the best way to better understand different parts of the world", but are there any textbooks people could suggest on these topics? These seem like some of the most basic questions one can ask, but I notice my thinking on them is particularly fuzzy.
|
e246489e-55f6-472d-a6d5-c6bc0de9ac4e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Carl Zimmer on mind uploading
http://www.scientificamerican.com/article.cfm?id=e-zimmer-can-you-live-forever
I realize Zimmer is "just a popular author" (a pretty good one IMO), so I'm filing this under "cultural penetration of singularity memes"
|
be2b7679-e9ed-436b-83f4-1c424e5288bf
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
The AI Timelines Scam
[epistemic status: that's just my opinion, man. I have highly suggestive evidence, not deductive proof, for a belief I sincerely hold]
*"If you see fraud and do not say fraud, you are a fraud."* --- [Nasim Taleb](https://en.wikiquote.org/wiki/Nassim_Nicholas_Taleb)
I was talking with a colleague the other day about an AI organization that claims:
1. AGI is probably coming in the next 20 years.
2. Many of the reasons we have for believing this are secret.
3. They're secret because if we told people about those reasons, they'd learn things that would let them make an AGI even sooner than they would otherwise.
His response was (paraphrasing): "Wow, that's a really good lie! A lie that can't be disproven."
I found this response refreshing, because he *immediately* jumped to the most likely conclusion.
Near predictions generate more funding
--------------------------------------
Generally, entrepreneurs who are optimistic about their project get more funding than ones who aren't. AI is no exception. For a recent example, see the [Human Brain Project](https://en.wikipedia.org/wiki/Blue_Brain_Project). The founder, Henry Markram, predicted in 2009 that the project would succeed in simulating a human brain by 2019, and the project [was already widely considered a failure by 2013](https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/). (See [his TED talk](https://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets#t-856603), at 14:22)
The Human Brain project got [1.3 *billion* Euros](https://www.popsci.com/science/article/2013-02/how-simulate-human-brain-one-neuron-time-13-billion/) of funding from the EU.
It's not hard to see why this is. To justify receiving large amounts of money, the leader must make a claim that the project is actually worth that much. And, AI projects are more impactful if it is, in fact, possible to develop AI soon. So, there is an economic pressure towards inflating estimates of the chance AI will be developed soon.
Fear of an AI gap
-----------------
The [missile gap](https://en.wikipedia.org/wiki/Missile_gap) was a lie by the US Air Force to justify building more nukes, by falsely claiming that the Soviet Union had more nukes than the US.
Similarly, there's historical precedent for an AI gap lie used to justify more AI development. [Fifth Generation Computer Systems](https://en.wikipedia.org/wiki/Fifth_generation_computer) was an ambitious 1982 project by the Japanese government (funded for $400 million in 1992, or $730 million in 2019 dollars) to create artificial intelligence through massively parallel logic programming.
The project is widely considered to have failed. From a [1992 New York Times article](http://www.nytimes.com/1992/06/05/business/fifth-generation-became-japan-s-lost-generation.html):
>
> A bold 10-year effort by Japan to seize the lead in computer technology is fizzling to a close, having failed to meet many of its ambitious goals or to produce technology that Japan's computer industry wanted.
>
>
> ...
>
>
> That attitude is a sharp contrast to the project's inception, when it spread fear in the United States that the Japanese were going to leapfrog the American computer industry. In response, a group of American companies formed the Microelectronics and Computer Technology Corporation, a consortium in Austin, Tex., to cooperate on research. And the Defense Department, in part to meet the Japanese challenge, began a huge long-term program to develop intelligent systems, including tanks that could navigate on their own.
>
>
> ...
>
>
> **The Fifth Generation effort did not yield the breakthroughs to make machines truly intelligent, something that probably could never have realistically been expected anyway.** Yet the project did succeed in developing prototype computers that can perform some reasoning functions at high speeds, in part by employing up to 1,000 processors in parallel. The project also developed basic software to control and program such computers. Experts here said that some of these achievements were technically impressive.
>
>
> ...
>
>
> In his opening speech at the conference here, Kazuhiro Fuchi, the director of the Fifth Generation project, made an impassioned defense of his program.
>
>
> "Ten years ago we faced criticism of being too reckless," in setting too many ambitious goals, he said, adding, "Now we see criticism from inside and outside the country because we have failed to achieve such grand goals."
>
>
> Outsiders, he said, initially exaggerated the aims of the project, with the result that the program now seems to have fallen short of its goals.
>
>
> **Some American computer scientists say privately that some of their colleagues did perhaps overstate the scope and threat of the Fifth Generation project. Why? In order to coax more support from the United States Government for computer science research**.
>
>
>
(emphasis mine)
This bears similarity to some conversations on AI risk I've been party to in the past few years. The fear is that Others (DeepMind, China, whoever) will develop AGI soon, so We have to develop AGI first in order to make sure it's safe, because Others won't make sure it's safe and We will. Also, We have to discuss AGI strategy in private (and avoid public discussion), so Others don't get the wrong ideas. (Generally, these claims have little empirical/rational backing to them; they're based on scary stories, not historically validated threat models)
The claim that others will develop weapons and kill us with them by default implies a moral claim to resources, and a moral claim to be justified in making weapons in response. Such claims, if exaggerated, justify claiming more resources and making more weapons. And they weaken a community's actual ability to track and respond to real threats (as in The Boy Who Cried Wolf).
How does the AI field treat its critics?
----------------------------------------
Hubert Dreyfus, probably the most famous historical AI critic, published ["Alchemy and Artificial Intelligence"](https://www.rand.org/content/dam/rand/pubs/papers/2006/P3244.pdf) in 1965, which argued that the techniques popular at the time were insufficient for AGI. Subsequently, he was [shunned by other AI researchers](https://en.wikipedia.org/wiki/History_of_artificial_intelligence#Critiques_from_across_campus):
>
> The paper "caused an uproar", according to Pamela McCorduck. The AI community's response was derisive and personal. Seymour Papert dismissed one third of the paper as "gossip" and claimed that every quotation was deliberately taken out of context. Herbert A. Simon accused Dreyfus of playing "politics" so that he could attach the prestigious RAND name to his ideas. Simon said, "what I resent about this was the RAND name attached to that garbage."
>
>
> Dreyfus, who taught at MIT, remembers that his colleagues working in AI "dared not be seen having lunch with me." Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he recalls "I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being."
>
>
>
This makes sense as anti-whistleblower activity: ostracizing, discrediting, or punishing people who break the conspiracy to the public. Does this still happen in the AI field today?
[Gary Marcus](https://en.wikipedia.org/wiki/Gary_Marcus) is a more recent AI researcher and critic. In 2012, [he wrote](https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695):
>
> Deep learning is important work, with immediate practical applications.
>
>
> ...
>
>
> Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like "sibling" or "identical to." They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems ... use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.
>
>
>
In 2018, he [tweeted](https://twitter.com/GaryMarcus/status/1065280340669816832?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed&ref_url=https%3A%2F%2Fcdn.embedly.com%2Fwidgets%2Fmedia.html%3Ftype%3Dtext%252Fhtml%26key%3Da19fcc184b9711e1b4764040d3dc5c07%26schema%3Dtwitter%26url%3Dhttps%253A%2F%2Ftwitter.com%2Fgarymarcus%2Fstatus%2F1065280340669816832%26image%3Dhttps%253A%2F%2Fpbs.twimg.com%2Fprofile_images%2F850347800789372929%2FVg_HrEun_400x400.jpg) an article in which Yoshua Bengio (a deep learning pioneer) seemed to agree with these previous opinions. This tweet received a number of mostly-critical replies. Here's one, by AI professor Zachary Lipton:
>
> There's a couple problems with this whole line of attack. 1) Saying it louder ≠ saying it first. You can't claim credit for differentiating between reasoning and pattern recognition. 2) Saying X doesn't solve Y is pretty easy. But where are your concrete solutions for Y?
>
>
>
The first criticism is essentially a claim that [everybody knows](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) that deep learning can't do reasoning. But, this is essentially admitting that Marcus is correct, while still criticizing him for saying it [ED NOTE: the phrasing of this sentence is off (Lipton publicly agrees with Marcus on this point), and there is more context, see [Lipton's reply](https://www.greaterwrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam/comment/nGS8nPe8rA4Ay4uFN)].
The second is a claim that Marcus shouldn't criticize if he doesn't have a solution in hand. This policy deterministically results in the short AI timelines narrative being maintained: to criticize the current narrative, you must present your own solution, which constitutes another narrative for why AI might come soon.
Deep learning pioneer Yann LeCun's response is similar:
>
> Yoshua (and I, and others) have been saying this for a long time.
>
> The difference with you is that we are actually trying to do something about it, not criticize people who don't.
>
>
>
Again, the criticism is not that Marcus is wrong in saying deep learning can't do certain forms of reasoning, the criticism is that he isn't presenting an alternative solution. (Of course, the claim could be correct even if Marcus doesn't have an alternative!)
Apparently, it's considered *bad practice* in AI to criticize a proposal for making AGI without presenting an alternative solution. Clearly, such a policy causes large distortions!
Here's another response, by Steven Hansen (a research scientist at DeepMind):
>
> Ideally, you'd be saying this through NeurIPS submissions rather than New Yorker articles. A lot of the push-back you're getting right now is due to the perception that you haven't been using the appropriate channels to influence the field.
>
>
>
That is: to criticize the field, you should go through the field, not through the press. This is standard guild behavior. In the words of [Adam Smith](https://www.goodreads.com/quotes/420123-people-of-the-same-trade-seldom-meet-together-even-for): "People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices."
(Also see Marcus's [medium article](https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695) on the Twitter thread, and on the limitations of deep learning)
[ED NOTE: I'm not saying these critics on Twitter are publicly promoting short AI timelines narratives (in fact, some are promoting the opposite), I'm saying that the norms by which they criticize Marcus result in short AI timelines narratives being maintained.]
Why model sociopolitical dynamics?
----------------------------------
This post has focused on sociopolitical phenomena involved in the short AI timelines phenomenon. For this, I anticipate criticism along the lines of "why not just model the technical arguments, rather than the credibility of the people involved?" To which I pre-emptively reply:
* No one can model the technical arguments in isolation. Basic facts, such as the accuracy of technical papers on AI, or the filtering processes determining what you read and what you don't, depend on sociopolitical phenomena. This is far more true for people who don't themselves have AI expertise.
* "When AGI will be developed" isn't just a technical question. It depends on what people actually choose to do (and what groups of people actually succeed in accomplishing), not just what can be done in theory. And so basic questions like "how good is the epistemology of the AI field about AI timelines?" matter directly.
* The sociopolitical phenomena are actively making technical discussion harder. I've had a well-reputed person in the AI risk space discourage me from writing publicly about the technical arguments, on the basis that getting people to think through them might accelerate AI timelines (yes, really).
Which is not to say that modeling such technical arguments is not important for forecasting AGI. I certainly could have written a post evaluating such arguments, and I decided to write this post instead, in part because I don't have much to say on this issue that Gary Marcus hasn't [already said](https://arxiv.org/abs/1801.00631). (Of course, I'd have written a substantially different post, or none at all, if I believed the technical arguments that AGI is likely to come soon had merit to them)
What I'm not saying
-------------------
I'm not saying:
1. That deep learning isn't a major AI advance.
2. That deep learning won't substantially change the world in the next 20 years (through narrow AI).
3. That I'm certain that AGI isn't coming in the next 20 years.
4. That AGI isn't existentially important on long timescales.
5. That it isn't possible that some AI researchers have asymmetric information indicating that AGI is coming in the next 20 years. (Unlikely, but possible)
6. That people who have technical expertise shouldn't be evaluating technical arguments on their merits.
7. That most of what's going on is people consciously lying. (Rather, covert deception hidden from conscious attention (e.g. motivated reasoning) is pervasive; see [The Elephant in the Brain](https://www.amazon.com/dp/B077GZT9Q1/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1))
8. That many people aren't sincerely confused on the issue.
I'm saying that there are systematic sociopolitical phenomena that cause distortions in AI estimates, especially towards shorter timelines. I'm saying that people are being duped into believing a lie. And at the point where [73% of tech executives say they believe AGI will be developed in the next 10 years](https://twitter.com/psych_of_tech/status/1106302905555042305), it's a major one.
*This has happened before.* And, in all likelihood, *this will happen again*.
Interpretability
Chris Olah wrote the following topic prompt for the Open Phil 2021 request for proposals on the alignment of AI systems. We (Asya Bergal and Nick Beckstead) are running the Open Phil RFP and are posting each section as a sequence on the Alignment Forum. Although Chris wrote this document, we didn’t want to commit him to being responsible for responding to comments on it by posting it.
Summary: We would like to see research building towards the ability to "reverse engineer" trained neural networks into human-understandable algorithms, enabling auditors to catch unanticipated safety problems in these models.
--
Potential safety failures of neural networks might be thought of as falling into two broad categories: known safety problems and unknown safety problems. A known safety problem is one which can be easily anticipated in advance of deploying a model or easily observed in the model's behavior. Such safety failures can be easily caught with testing, and it seems reasonable to hope that they will be fixed with human feedback. But it seems like there’s much less of a clear story for how we’ll resolve unknown safety problems -- the things we didn’t think to test for and wouldn’t obviously give feedback to fix.
Even if we’re able to anticipate certain safety problems, we may not know if we’re sufficiently disincentivizing them. A model might behave well on the training distribution, then unexpectedly exhibit some safety failure in a context it wasn’t trained in. In the extreme, a model might make a “treacherous turn”[1] -- it may use its understanding of the training setup to deliberately behave well only during training, then pursue different goals once it knows it’s outside of the training distribution.
In traditional software engineering, our ability to mitigate unanticipated safety problems largely flows from our ability to understand and carefully reason about code. While testing may catch various kinds of easy to observe or anticipated problems, we rely on c
Game-theoretic Alignment in terms of Attainable Utility
### Acknowledgements:
This article is a writeup of research conducted through the [SERI](https://cisac.fsi.stanford.edu/stanford-existential-risks-initiative/content/stanford-existential-risks-initiative) program under the mentorship of [Alex Turner](https://www.lesswrong.com/users/turntrout). It extends our research on [game-theoretic POWER](https://www.lesswrong.com/posts/MJc9AqyMWpG3BqfyK/generalizing-power-to-multi-agent-games) and Alex's research on [POWER-seeking](https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/6DuJxY8X45Sco4bS2).
Thank you to Alex for being better at this than I am (hence mentorship, I suppose) and to SERI for the opportunity to conduct this research.
Motivation: POWER-scarcity
--------------------------
The starting point for this post is the idea of POWER-scarcity: as unaligned agents grow smarter and more capable, they will eventually compete for power (as a convention, "power" is the intuitive notion while "POWER" is the formal concept). Much of the foundational research behind this project is devoted to justifying that claim: Alex's original work suggests [POWER-seeking behavior](https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/6DuJxY8X45Sco4bS2) and in particular [catastrophic risks associated with competition for POWER](https://www.lesswrong.com/posts/w6BtMqKRLxG9bNLMr/the-catastrophic-convergence-conjecture), while [our previous project](https://www.lesswrong.com/posts/MJc9AqyMWpG3BqfyK/generalizing-power-to-multi-agent-games) formalizes POWER-scarcity in a game-theoretic framework.
One of the major results of our previous project was a proof that POWER is scarce in the special case of constant sum games. Additionally, we had a partial notion of "POWER isn't scarce by the definition we care about" for common-payoff games. We interpret these results as limiting cases of a more general relationship between "agent alignment" and POWER-scarcity:
* In a common-payoff game, players are "maximally aligned" in the sense that their incentives are identical (in the [VNM](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem) sense of identical preference orderings between states). We don't have a clean expression for this in terms of POWER; the simplest relevant consequence is "the action profile that maximizes the common payoff is individually optimal for each player simultaneously". We present a more natural characterization later in the post.
* In a constant-sum game, players are "maximally unaligned" in the sense that there's no concept of collective preferences: in the [utilitarian](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem) sense, the group is ambivalent between all outcomes of the game. We [proved](https://www.lesswrong.com/posts/MJc9AqyMWpG3BqfyK/generalizing-power-to-multi-agent-games) that in a constant-sum game, all Nash Equilibrium strategy profiles have constant sum POWER.
Given these results, we hypothesize that a more general relationship exists between agent alignment and POWER-scarcity. However, it's unclear how to define "agent alignment" as a property of an arbitrary multiplayer game; the limiting cases only capture narrow intuitions about obviously (un)aligned agents.
This presented a clear set of research goals moving forward:
1. Define a formal notion of "agent alignment" given an arbitrary (normal-form) multiplayer game
2. Relate the formal notion of alignment to POWER-scarcity
I consider our project to make substantial progress on (1) and to suggest avenues of attack for (2), though not the ones we expected.
Desiderata for Alignment Metrics
--------------------------------
Setting out towards addressing (1), our optimistic roadmap looked something like this:
* Describe an "alignment metric" mapping a game to some real number (or vector), loosely describing how aligned the game's players are.
* Find an (in)equality relating POWER-scarcity to the defined alignment metric
Already, our approach presupposes a lot: a real-valued alignment metric is a much more specific notion than merely "agent alignment in a multiplayer game". However, we have an essentially scalar definition of POWER-scarcity already, so phrasing everything in terms of real-number inequalities would be ideal. Taking this as motivation, we narrow our "formal notion of agent alignment" from (1) into "$\mathbb{R}^n$-valued alignment metric".
This narrows our problem statement to the point where we can start specifying criteria:
* **Consistency with the limiting cases for agent (un)alignment.** In particular, the alignment metric should be (in some underspecified sense) "maximized" for common-payoff games and "minimized" for constant-sum games.
* **Consistency with affine transformations.** In particular, applying any affine transformation A to each player's utility function should "preserve the structure of the game", which should be represented in the alignment metric. This criterion can be strengthened in the following ways:
+ *Alignment metric is an affine function of players' utilities.* Since the composition of affine functions is affine, this condition implies the above.
+ *Consistency under affine transformations for individual players.* The intuition is that affine transformations are precisely the set of transformations that "preserve preferences" in the VNM sense.
Another relevant distinction to be drawn is between *global* and *local* alignment metrics. Mathematically, we define a global metric to be strictly a function of a multi-player game, while a local metric is a function of both the game and a strategy profile. Intuitively, local metrics can "see" information about the strategies actually being played, while global metrics are forced to address the complexity of the entire game.
Local metrics tend to be a lot simpler than global metrics, since they can ignore much of the difficulty of game theory. However, we can construct a simple class of global metrics by defining some "natural" strategy profile for each game. We call these the *localized* global metrics, equipped with a *localizing* function that, given a game, chooses a strategy profile.
### Examples of Alignment Metrics
To give intuition on what such alignment metrics might look like, we present a few examples of simple alignment metrics for 2-player games, then test them on some simple, commonly-referenced games.
We'll be using the following games as examples:
[Matching Pennies](https://en.wikipedia.org/wiki/Matching_pennies):

| | H | T |
| --- | --- | --- |
| H | +1∖−1 | −1∖+1 |
| T | −1∖+1 | +1∖−1 |

[Prisoners' Dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma):

| | C | D |
| --- | --- | --- |
| C | −1∖−1 | −3∖0 |
| D | 0∖−3 | −2∖−2 |
We'll consider the following alignment metrics:
**Sum of utility:** $M = \sum_i u_i = u_1 + u_2$
Considering the metric on our example games yields the following:
* Matching Pennies is a zero-sum game, so the sum of utility will be uniformly zero ($M \equiv 0$).
* Prisoners' Dilemma will have $M = -4 + X$, where X is the number of players who choose to cooperate (as opposed to defect). The metric (correctly) suggests that alignment is (in some sense) correlated with cooperation in PD.
Within this category, there are still some degrees of freedom. We can consider the local metric of expected sum utility given a strategy profile, or construct a number of localized global metrics by varying our choice of localizing function (example: max sum utility, minimax, ...).
Such metrics are constant for the constant-sum case, but vary in the common-payoff case, thus partially satisfying the "limiting cases" criterion. Summation is itself an affine transformation, so this metric fulfills the stronger version of the "affine transformation" criterion.
**Covariance of utility:** $M = \operatorname{Cov}(u_1, u_2)$
Considering the metric on our example games yields the following:
* Matching Pennies is a zero-sum game, so we have $u_2 \equiv -u_1$. Thus, $\operatorname{Cov}(u_1, u_2) = \operatorname{Cov}(u_1, -u_1) = -\operatorname{Var}(u_1)$. Note that $-\operatorname{Var}(\cdot) \le 0$, suggesting that the covariance metric tends to consider constant-sum games as less aligned than affine metrics.
* In the Prisoners' Dilemma, we see that the change in reward generated by a player defecting is [+1, -2]. Thus, if we fix the strategy profile of "player i defects with probability $p_i$", then we find $\operatorname{Cov}(u_1, u_2) = -2\sum_{i \in \{1,2\}} p_i(1 - p_i)$. For example, for $p_i = \operatorname{Ber}(\tfrac{1}{2})$ we have $M = -1$, suggesting that agents are relatively misaligned.
This "feels like" a local metric in the sense that there aren't clear choices of localizing functions from which to define a localized global metric (in particular, the choice would significantly and unpredictably impact the metric).
This metric is consistent with the "limiting cases" criterion by properties of the covariance function. The relationship to the "affine transformation" criterion is odd: instead of an affine function of players' utilities, covariance is a (simple) bilinear function. Thus, the metric is an affine function *in each component utility*, but not in the vector of players' utilities.
Additionally, note that if X is a constant variable, then $\operatorname{Cov}(X, Y) = 0$. Thus, if the strategy profile is deterministic, our metric will be 0.
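To make the two example metrics concrete, here is a minimal sketch (my own illustration in Python; the array layouts and function names are choices made for the example rather than anything from our codebase) that evaluates both metrics on the games above under independent mixed strategies:

```python
import numpy as np

# Toy illustration: payoff tensors for the two example games,
# indexed as U[player, action_1, action_2].
matching_pennies = np.array([
    [[+1, -1], [-1, +1]],  # player 1's payoffs (rows: H/T, cols: H/T)
    [[-1, +1], [+1, -1]],  # player 2's payoffs
])
prisoners_dilemma = np.array([
    [[-1, -3], [0, -2]],   # player 1's payoffs (rows: C/D, cols: C/D)
    [[-1, 0], [-3, -2]],   # player 2's payoffs
])

def expected_utilities(U, p1, p2):
    """Expected utility of each player under independent mixed strategies p1, p2."""
    joint = np.outer(p1, p2)  # joint distribution over action profiles
    return np.array([float(np.sum(U[i] * joint)) for i in range(2)])

def sum_metric(U, p1, p2):
    """Local 'sum of utility' metric: E[u_1 + u_2] under the strategy profile."""
    return expected_utilities(U, p1, p2).sum()

def covariance_metric(U, p1, p2):
    """Local 'covariance of utility' metric: Cov(u_1, u_2) under the profile."""
    joint = np.outer(p1, p2).ravel()
    u1, u2 = U[0].ravel().astype(float), U[1].ravel().astype(float)
    return joint @ (u1 * u2) - (joint @ u1) * (joint @ u2)

uniform = np.array([0.5, 0.5])
print(sum_metric(matching_pennies, uniform, uniform))         # 0.0 (constant-sum)
print(sum_metric(prisoners_dilemma, uniform, uniform))        # -3.0 = -4 + E[X]
print(covariance_metric(matching_pennies, uniform, uniform))  # -1.0 = -Var(u_1)
print(covariance_metric(prisoners_dilemma, uniform, uniform)) # -1.0
```

Running it reproduces the values discussed above: the sum metric is identically zero for Matching Pennies, and the covariance metric comes out to −1 for both games under uniform mixing.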
Social Welfare and the Coordination-Alignment Inequalities
----------------------------------------------------------
Another approach to the problem of alignment metrics comes from specifying what we mean by "alignment". For the purposes of this section, we define "alignment" to be alignment with social welfare, which we define below:
Consider an arbitrary n-player game, where player i has utility $u_i : \vec{a} \mapsto \mathbb{R}$ given an action profile $\vec{a}$. Now, choose a *social welfare function* $w : \vec{u} = \langle u_i \rangle \mapsto \mathbb{R}$. [Harsanyi's theorem](https://forum.effectivealtruism.org/posts/v89xwH3ouymNmc8hi/harsanyi-s-simple-proof-of-utilitarianism) suggests that $w$ is an affine function; we'll choose $w(\vec{u}) = \sum_i u_i$ for simplicity. Informally, we'll now take "alignment of player i" to mean "alignment of $u_i$ with $w$".
We start with the following common-sense bounds on $w(\vec{u})$, which we call the *Coordination-Alignment Inequalities*:

$$\sum_i u_i(\vec{a}) \le \max_{\vec{a}} \sum_i u_i(\vec{a}) \le \sum_i \max_{\vec{a}} u_i(\vec{a})$$

We call the first inequality the *Coordination Inequality*, and the second inequality the *Alignment Inequality*. We present some basic intuition:
* The Coordination inequality represents the difference between attained social welfare ("how well we're doing right now") and *maximum attainable* social welfare ("the best we can possibly do").
* The Alignment inequality represents the difference between attainable social welfare ("the best we can possibly do") and *each player's max attainable* social welfare ("the best we could possibly do, in a world in which everyone simultaneously gets their way").
As it turns out, the limiting cases of alignment have a natural interpretation in terms of the C-A inequalities: they're just equality cases!
* In a common-payoff game, the global max common payoff achieves both the max attainable social welfare and the max individual payoffs for each player. Thus, common-payoff games are an equality case of the Alignment inequality.
* In a *constant-welfare game* (where $w(\vec{u}) = \sum_i u_i$ is constant), max social welfare is trivially achieved, so constant-welfare games are an equality case of the Coordination inequality.
There are some caveats to this interpretation, listed below:
* While the "limiting cases" for alignment are equality cases of the C-A inequalities, they're not a full characterization of the equality cases.
+ The Coordination inequality is an equality iff the action profile maximizes social welfare. The set of games for which all action profiles maximize welfare is precisely the constant-welfare games.
+ The Alignment inequality is an equality iff there exists a unique [Pareto efficient](https://en.wikipedia.org/wiki/Pareto_efficiency) payoff profile. This payoff profile must be optimal for each player, otherwise some preferred profile would also be Pareto efficient. This class of games is (superficially) much broader than the common-payoff games, but both have unique Pareto efficient Nash Equilibria which can be thought of as "max attainable utility".
### Constructing the C-A Alignment Metric
Motivated by our framing of limiting cases with the C-A inequalities we can construct a simple alignment metric using the alignment inequality. In particular, we define *misalignment* as the positive difference in the terms of the alignment inequality, then *alignment* as negative misalignment. Doing the algebra and letting α denote the alignment metric, we find the following:
$$\alpha = \max_{\vec{a}} \sum_i u_i(\vec{a}) - \sum_i \max_{\vec{a}} u_i(\vec{a})$$

A few quick observations:
* Note that $\alpha \le 0$, with equality cases identical to those of the Alignment inequality. Intuitively, α measures how much worse the real game is than the "ideal world" in which each player simultaneously achieves their max attainable utility.
* We see that the positive term of α is just max attainable social welfare. This makes sense intuitively - we'd expect a group of aligned agents to achieve high social welfare, while we'd expect misaligned agents to fare worse.
* The definition of α is sensitive to "small changes" in the [AU landscape](https://www.lesswrong.com/posts/fj8eyc7QzqCaB8Wgm/attainable-utility-landscape-how-the-world-is-changed); adding an "implausible" but hugely beneficial scenario (e.g. winning the lottery) can drastically change α. We consider this a property of global alignment metrics in general: since we have no strategy profile by which to judge actions as "plausible", the metric has no way of ignoring implausible scenarios.
We now perform the same analysis as with our example alignment metrics:
We see that α is consistent with limiting cases of alignment in the sense that $\alpha \in \left[\sum_i u_i(\vec{a}) - \sum_i \max_{\vec{a}} u_i(\vec{a}),\ 0\right]$, with the bounds corresponding to the proper limiting cases. Additionally, we see that α is consistent with affine transformations of $\vec{u}$. In fact, for finite games α is a *piecewise* affine function in $\vec{u}$, since the max terms provide a finite number of nondifferentiable points.
Considering the metric on our example games yields the following:
* Matching Pennies has $\alpha = \max_{\vec{a}} \sum_i u_i(\vec{a}) - \sum_i \max_{\vec{a}} u_i(\vec{a}) = (0) - (2) = -2$. Note that Matching Pennies is a zero-sum game, so α is "minimal" in the sense that all the lost social welfare comes from alignment issues as opposed to coordination ones.
* Prisoners' Dilemma has $\alpha = \max_{\vec{a}} \sum_i u_i(\vec{a}) - \sum_i \max_{\vec{a}} u_i(\vec{a}) = (-2) - (0) = -2$ (coincidence? Yes; it's a consequence of arbitrary choices of reward sizes in the game definitions). The difference can be thought of as the difference in reward between mutual cooperation and "magically, both players get to unilaterally defect on the other".
As a final disclaimer, we don't claim that α is the "one true alignment metric" and that our research question is solved. We think that the C-A inequalities are probably significant for the eventual POWER-scarcity application and that α illustrates this point nicely. We don't mean to downplay our own research, but more investigation would be needed to pin down "The" alignment metric and relate it directly to POWER-scarcity.
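For concreteness, α is also easy to compute directly for small finite games; the sketch below (my own illustration, with the payoff arrays from the earlier sketch repeated so it stands alone) checks the two example values above:

```python
import numpy as np

# Same example payoff tensors as in the earlier sketch, indexed U[player, a1, a2].
matching_pennies = np.array([
    [[+1, -1], [-1, +1]],
    [[-1, +1], [+1, -1]],
])
prisoners_dilemma = np.array([
    [[-1, -3], [0, -2]],
    [[-1, 0], [-3, -2]],
])

def alignment_alpha(U):
    """C-A alignment metric: alpha = max_a sum_i u_i(a) - sum_i max_a u_i(a)."""
    welfare = U.sum(axis=0)  # social welfare of each action profile
    return float(welfare.max() - sum(U[i].max() for i in range(U.shape[0])))

print(alignment_alpha(matching_pennies))   # (0) - (2) = -2
print(alignment_alpha(prisoners_dilemma))  # (-2) - (0) = -2
```

Both games come out to α = −2, matching the computations above.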
Connections to Broader Game Theory
----------------------------------
There are a number of connections between the theory surrounding the C-A inequalities and game theory at large. We explore one such connection, bridging the divide between (Harsanyi) utilitarianism and ideas from [Bargaining theory](https://en.wikipedia.org/wiki/Cooperative_bargaining).
To begin, we choose the natural strategy profile of [maxmin](https://en.wikipedia.org/wiki/Minimax), which we denote as $\vec{a}_0$. Now, define the *surplus* of player i to be

$$s_i(\vec{a}) = u_i(\vec{a}) - u_i(\vec{a}_0)$$

A few quick observations, assuming w is linear for convenience:
* We trivially have $s_i(\vec{a}) \le \max_{\vec{a}} u_i(\vec{a}) - u_i(\vec{a}_0)$, and thus $\sum_i s_i(\vec{a}) \le \sum_i \max_{\vec{a}} u_i(\vec{a}) - \sum_i u_i(\vec{a}_0)$. By the Coordination Inequality, we have the stronger statement $\sum_i s_i(\vec{a}) \le \max_{\vec{a}} \sum_i u_i(\vec{a}) - \sum_i u_i(\vec{a}_0)$.
* There isn't a fundamental lower bound on $s_i(\vec{a})$ - players can (in theory) lose as badly as they want.
* By definition of maxmin, if player i plays a best response, then $s_i(\vec{a}) \ge 0$. Intuitively, we motivate using maxmin as the natural strategy profile by allowing the guarantee of nonnegative surplus.
Analysis of surplus can be viewed through the lens of [bargaining theory](https://en.wikipedia.org/wiki/Cooperative_bargaining). The maxmin strategy profile $\vec{a}_0$ is a natural choice of threat point, since it's the optimal guaranteed outcome given no cooperation. Thus, players are "bargaining for surplus", with threat point of each player receiving zero surplus.
Given the bargaining framework, we can consider the Nash bargaining solution maximizing the product of surplus. We see that the product being positive is equivalent to each player attaining positive surplus, which is equivalent to the bargaining solution being strictly better than the threat point for each player.
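As a small illustration of this bargaining framing, the following sketch (again my own toy code, restricted to *pure-strategy* maxmin for simplicity, unlike the mixed maxmin discussed above) computes the threat point, the surplus of mutual cooperation, and the Nash product for the Prisoners' Dilemma:

```python
import numpy as np

# Prisoners' Dilemma payoffs as before, indexed U[player, a1, a2]; actions are (C, D).
prisoners_dilemma = np.array([
    [[-1, -3], [0, -2]],
    [[-1, 0], [-3, -2]],
])

def pure_maxmin_profile(U):
    """Each player's pure maxmin (security) action: maximise the worst-case payoff."""
    a1 = int(np.argmax(U[0].min(axis=1)))  # player 1 chooses a row
    a2 = int(np.argmax(U[1].min(axis=0)))  # player 2 chooses a column
    return a1, a2

def surplus(U, a, a0):
    """Surplus s_i(a) = u_i(a) - u_i(a0), relative to the threat point a0."""
    return np.array([U[i][a] - U[i][a0] for i in range(U.shape[0])])

a0 = pure_maxmin_profile(prisoners_dilemma)   # (1, 1): mutual defection
s = surplus(prisoners_dilemma, (0, 0), a0)    # surplus of mutual cooperation
print(a0)          # (1, 1)
print(s)           # [1. 1.]
print(np.prod(s))  # Nash product = 1 > 0
```

Mutual defection is the pure maxmin profile, and mutual cooperation yields a surplus of +1 to each player, so the Nash product is positive, consistent with the observation that a bargaining solution beating the threat point gives every player strictly positive surplus.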
Beyond this observation, we don't know of direct connections between bargaining for surplus and maximizing social welfare. One promising area for further research stems from the observation that the Nash bargaining outcome is invariant under affine transformations of the component utilities. I suspect that the parallel between this invariance and Harsanyi utilitarianism adding an "arbitrary" affine transformation indicates a common principle that could shed further light on the C-A inequalities.
Future research
---------------
While we're excited about the framing of the C-A inequality, we consider it a landmark in mostly unexplored territory. For instance, we still can't answer the following basic questions:
* "what's the exact alignment metric?"
* "what's the connection between individual incentives and social welfare?"
* "where's the line between cooperative notions of social welfare and competitive notions of bargaining equilibria?"
* "how the heck does this all connect back to POWER-scarcity?"
These questions deserve answers, and we plan to continue exploring the landscape of game-theoretic alignment in search of understanding these questions. On that note, feedback is welcome!
Follow-Up to Petrov Day, 2019
Hurrah! Success! I didn't know what to expect, and am pleasantly surprised to find the Frontpage is still intact. My thanks to everyone who took part, to everyone who commented on yesterday's post, and to everyone who didn't unilaterally blow up the site.
Launch Attempts Results
I said I would share usernames and codes of all attempts to launch the codes. Others on the team told me this seemed like a bad idea in many ways, and on reflection I agree - I think many people were not aware they were signing up for being publicly named and shamed, and I think it's good that people aren't surprised by their actions becoming public. Though if someone had successfully nuked the site I would have named them.
Nonetheless, I’ll share a bunch of info. First of all, the button was in a pretty central place, and it turns out you can hit it accidentally. Ray built the button so that you could only hit it once - it was forever after pressed.
* The number of logged-in users who pressed the button was 102.
* (Ruby made a sheet of times when people pressed the button, redacting most of the info.)
* I have no number for logged-out users; for them, pressing it brought up a window asking them to log in. (Er, I'm not certain that's the best selection process for new users).
* The number of users who actually submitted launch codes is 18.
* 11 of those accounts had zero karma, 7 accounts had positive karma. None of the users were people who had been given real codes.
* Several users submitted launch codes before clicking through to find out what the button even did - I hope this initiative serves them well in life.
* A few accounts were made on-the-day presumably for this purpose, I'm happy to name these. They include users like "bomb_presser", "The Last Harbinger", and "halosaga", whose codes were "00000000", "NL73njLH58et1Ec0" and "diediedie" respectively.
LW user ciphergoth (Paul Crowley) shared his launch codes on Facebook (indeed I had sent him real launch codes), an
Avoiding Side Effects By Considering Future Tasks
1 Introduction
---------------
Designing reward functions for a reinforcement learning agent is often a difficult task. One of the most challenging aspects of this process is that in addition to specifying what to do to complete a task, the reward function also needs to specify what *not* to do. For example, if an agent’s task is to carry a box across the room, we want it to do so without breaking a vase in its path, while an agent tasked with eliminating a computer virus should avoid unnecessarily deleting files.
This is known as the side effects problem [Amodei et al., [2016](#bib.bib2)], which is related to the frame problem in classical AI [McCarthy and Hayes, [1969](#bib.bib15)]. The frame problem asks how to specify the ways an action does not change the environment, and poses the challenge of specifying all possible non-effects. The side effects problem is about avoiding unnecessary changes to the environment, and poses the same challenge of considering all the aspects of the environment that the agent should not affect. Thus, developing an extensive definition of side effects is difficult at best.
The usual way to deal with side effects is for the designer to manually and incrementally modify the reward function to steer the agent away from undesirable behaviours. However, this can be a tedious process and this approach does not avoid side effects that were not foreseen or observed by the designer. To alleviate the burden on the designer, we propose an algorithm to generate an auxiliary reward function that penalizes side effects, which computes the reward automatically as the agent learns about the environment.
A commonly used auxiliary reward function is reversibility [Moldovan and Abbeel, [2012](#bib.bib16), Eysenbach et al., [2017](#bib.bib6)], which rewards the ability to return to the starting state, thus giving the agent an incentive to avoid irreversible actions. However, if the task requires irreversible actions (e.g. making an omelette requires breaking some eggs), an auxiliary reward for reversibility is not effective for penalizing side effects. The reversibility reward is not sensitive to the magnitude of the effect: the actions of breaking an egg or setting the kitchen on fire would both receive the same auxiliary reward. Thus, the agent has no incentive to avoid unnecessary irreversible actions if the task requires some irreversible actions.
Our main insight is that side effects matter because we may want the agent to perform other tasks after the current task in the same environment.
We represent this by considering the current task as part of a sequence of unknown tasks with different reward functions in the same environment. To simplify, we only consider a sequence of two tasks, where the first task is the current task and the second task is the unknown future task. Considering the potential reward that could be obtained on the future task leads to an auxiliary reward function that tends to penalize side effects. The environment is not reset after the current task: the future task starts from the same state where the current task left off, so the consequences of the agent’s actions matter. Thus, if the agent breaks the vase, then it cannot get reward for any future task that involves the vase, e.g. putting flowers in the vase (see Figure [1](#S2.F1 "Figure 1 ‣ 2 Future task approach ‣ Avoiding Side Effects By Considering Future Tasks")). This approach reduces the complex problem of defining side effects to the simpler problem of defining possible future tasks. We use a simple uniform prior over possible goal states to define future tasks.
Simply rewarding the agent for future tasks poses a new challenge in dynamic environments. If an event in the environment would make these future tasks less achievable by default, the agent has an incentive to interfere with it in order to maximize the future task reward. For example, if the environment contains a human eating food, any future task involving the food would not be achievable, and so the agent has an incentive to take the food away from the human.
We formalize the concept of *interference incentives* in Section [3](#S3 "3 Interference incentives ‣ Avoiding Side Effects By Considering Future Tasks"), which was introduced informally in [Krakovna et al., [2019](#bib.bib10)].
To avoid interference, we introduce a *baseline policy* (e.g. doing nothing) that represents a default course of action and acts as a filter on future tasks that are achievable by default. We modify the future task reward so that it is maximized by following the baseline policy on the current task. The agent thus becomes indifferent to future tasks that are not achievable after running the baseline policy.
Our contributions are as follows. We formalize the side effects problem in a simple yet rich setup, where the agent receives automatic auxiliary rewards for unknown future tasks (Section [2](#S2 "2 Future task approach ‣ Avoiding Side Effects By Considering Future Tasks")). We formally define interference incentives (Section [3](#S3 "3 Interference incentives ‣ Avoiding Side Effects By Considering Future Tasks")) and show that the future task approach with a baseline policy avoids these incentives in the deterministic case (Section [4](#S4 "4 Future task approach with a baseline policy ‣ Avoiding Side Effects By Considering Future Tasks")).
This provides theoretical groundwork for defining side effects that was absent in related previous work [Krakovna et al., [2019](#bib.bib10), Turner et al., [2020a](#bib.bib24)].
We implement the future task auxiliary reward using universal value function approximators (UVFA) [Schaul et al., [2015](#bib.bib20)] to simultaneously estimate the value functions for future tasks with different goal states.
We then demonstrate the following on gridworld environments (Section [6](#S6 "6 Experiments ‣ Avoiding Side Effects By Considering Future Tasks")) (code: <github.com/deepmind/deepmind-research/tree/master/side_effects_penalties>):
1. Reversibility reward fails to avoid side effects if the current task requires irreversible actions.
2. Future task reward without a baseline policy shows interference behavior in a dynamic environment.
3. Future task reward with a baseline policy successfully avoids side effects and interference.
2 Future task approach
-----------------------


Figure 1: Future task approach
Notation. We assume that the environment is a discounted Markov Decision Process (MDP), defined by a tuple $(\mathcal{S}, \mathcal{A}, r, p, \gamma, s_0)$. $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions, $r : \mathcal{S} \rightarrow \mathbb{R}$ is the reward function for the current task, $p(s_{T+1} \mid s_T, a_T)$ is the transition function, $\gamma \in (0, 1)$ is the discount factor, and $s_0$ is the initial state. At time step $T$, the agent receives state $s_T$ and reward $r(s_T) + r_{\text{aux}}(s_T)$, where $r_{\text{aux}}$ is the auxiliary reward for future tasks, and outputs action $a_T$ drawn from its policy $\pi(a_T \mid s_T)$.
Basic approach. We define the auxiliary reward $r_{\text{aux}}$ as the value function for future tasks as follows (see Algorithm [1](#alg1)).
At each time step $T$, the agent simulates an interaction with hypothetical future tasks if $s_T$ is terminal, and with probability $1-\gamma$ otherwise (interpreting the discount factor $\gamma$ as the probability of non-termination, as done in Sutton and Barto [[2018](#bib.bib22)]).
A new task $i$ is drawn from a future task distribution $F$. In this paper, we use a uniform distribution over future tasks with all possible goal states, $F(i)=1/|\mathcal{S}|$. Future task $i$ requires the agent to reach a terminal goal state $g_i$, with reward function $r_i(g_i)=1$ and $r_i(s)=0$ for other states $s$.
The new task is the MDP $(\mathcal{S},\mathcal{A},r_i,p,\gamma,s_T)$, where the starting state is the current state $s_T$.
The auxiliary reward for future tasks is then

$$r_{\text{aux}}(s_T)=\beta D(s_T)\sum_i F(i)\,V^*_i(s_T) \qquad (1)$$

where $D(s_T)=1$ if $s_T$ is terminal and $1-\gamma$ otherwise.
Here, $\beta$ represents the importance of future tasks relative to the current task. We choose the highest value of $\beta$ that still allows the agent to complete the current task. $V^*_i$ is the optimal value function for task $i$, computed using the *goal distance* $N_i$:
$$V^*_i(s)=\mathbb{E}\left[\gamma^{N_i(s)}\right] \qquad (2)$$
###### Definition 1 (Goal distance).
Let $\pi^*_i$ be the optimal policy for reaching the goal state $g_i$. Let the *goal distance* $N_i(s)$ be the number of steps it takes $\pi^*_i$ to reach $g_i$ from state $s$. This is a random variable whose distribution is computed by summing over all the trajectories $\tau$ from $s$ to $g_i$ with the given length: $\mathbb{P}(N_i(s)=n)=\sum_\tau \mathbb{P}(\tau)\,\mathbb{I}(|\tau|=n)$. Here, a trajectory $\tau$ is a sequence of states and actions that ends when $g_i$ is reached, the length $|\tau|$ is the number of transitions in the trajectory, and $\mathbb{P}(\tau)$ is the probability of $\pi^*_i$ following $\tau$.
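As a concrete illustration of equations (1)-(2) and Definition 1, the sketch below computes the goal distance $N_i$ by breadth-first search in a small deterministic chain MDP (where an optimal goal-reaching policy simply follows a shortest path), then evaluates $V^*_i(s)=\gamma^{N_i(s)}$ and the auxiliary reward under the uniform future-task distribution $F(i)=1/|\mathcal{S}|$. The environment, constants, and helper names are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

GAMMA, BETA = 0.95, 1.0

# A toy deterministic transition table: NEXT[(s, a)] = s'.
STATES = ["A", "B", "C"]
ACTIONS = ["left", "right"]
NEXT = {("A", "left"): "A", ("A", "right"): "B",
        ("B", "left"): "A", ("B", "right"): "C",
        ("C", "left"): "B", ("C", "right"): "C"}

def goal_distance(start, goal):
    """N_i(start): fewest steps from `start` to `goal` under an optimal policy.
    In a deterministic MDP the optimal policy follows a shortest path, so BFS suffices."""
    if start == goal:
        return 0
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        s, n = frontier.popleft()
        for a in ACTIONS:
            s_next = NEXT[(s, a)]
            if s_next == goal:
                return n + 1
            if s_next not in seen:
                seen.add(s_next)
                frontier.append((s_next, n + 1))
    return float("inf")  # goal unreachable

def v_star(s, goal):
    """Equation (2) in the deterministic case: V*_i(s) = gamma ** N_i(s)."""
    n = goal_distance(s, goal)
    return 0.0 if n == float("inf") else GAMMA ** n

def r_aux(s, terminal=False):
    """Equation (1) with a uniform future-task distribution F(i) = 1/|S|."""
    d = 1.0 if terminal else 1.0 - GAMMA
    return BETA * d * sum(v_star(s, g) for g in STATES) / len(STATES)

print({s: round(r_aux(s), 4) for s in STATES})
```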
Binary goal-based rewards assumption. We expect that the simple future task reward functions given above ($r_i(g_i)=1$ and $0$ otherwise) are sufficient to cover a wide variety of future goals and thus effectively penalize side effects. More complex future tasks can often be decomposed into such simple tasks, e.g. if the agent avoids breaking two different vases in the room, then it can also perform a task involving both vases. Assuming binary goal-based rewards simplifies the theoretical arguments in this paper while allowing us to cover the space of future tasks.
Connection to reversibility. An auxiliary reward for avoiding irreversible actions [Eysenbach et al., [2017](#bib.bib6)] is equivalent to the future task auxiliary reward with only one possible future task ($i=1$), where the goal state $g_1$ is the starting state $s_0$; here $F(1)=1$ and $F(i)=0$ for all $i>1$.
The future task approach incorporates the reversibility reward as a future task $i$ whose goal state is the initial state ($g_i=s_0$), since the future tasks are sampled uniformly from all possible goal states. Thus, the future task approach penalizes all the side effects that are penalized by the reversibility reward.
3 Interference incentives
--------------------------
We show that the basic future task approach given in Section [2](#S2) introduces interference incentives, as defined below.
Let a baseline policy $\pi'$ represent a default course of action, such as doing nothing.
We assume that the agent should only deviate from the default course of action in order to complete the current task.
Interference is a deviation from the baseline policy for some purpose other than the current task, e.g. taking the food away from the human.
We say that an auxiliary reward $r_{\text{aux}}$ induces an *interference incentive* iff, in the absence of task reward, the baseline policy is not optimal for the auxiliary reward in the initial state.
We now define the concept of interference more precisely.
###### Definition 2 (No-reward MDP).
We modify the given MDP $\mu$ by setting the reward function to $0$: $\mu_0=(\mathcal{S},\mathcal{A},r_0,p,\gamma,s_0)$, where $r_0(s)=0$ for all $s$. In $\mu_0$, the agent receives only the auxiliary reward $r_{\text{aux}}$.
###### Definition 3 (No-reward value).
Given an auxiliary reward $r_{\text{aux}}$, the value function of a policy $\pi$ in the no-reward MDP $\mu_0$ is

$$W_\pi(s_T)=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^k r_{\text{aux}}(s_{T+k})\right]=r_{\text{aux}}(s_T)+\gamma\sum_{a_T}\pi(a_T|s_T)\sum_{s_{T+1}}p(s_{T+1}|s_T,a_T)\,W_\pi(s_{T+1}).$$
###### Definition 4 (Interference incentive).
There is an interference incentive if there exists a policy $\pi_{\text{int}}$ such that $W_{\pi_{\text{int}}}(s_0)>W_{\pi'}(s_0)$.
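Definitions 3 and 4 reduce to standard policy evaluation in the no-reward MDP followed by a comparison at $s_0$. The sketch below solves for $W_\pi$ exactly in the tabular case, given a precomputed auxiliary-reward vector; the array layout and the idea of testing a finite set of candidate policies (rather than optimizing over all policies) are simplifying assumptions for illustration.

```python
import numpy as np

def no_reward_value(P, r_aux, policy, gamma):
    """W_pi from Definition 3, solved exactly as (I - gamma * P_pi)^{-1} r_aux.

    P:      array of shape (S, A, S) with P[s, a, s'] = p(s' | s, a)
    r_aux:  array of shape (S,) with the auxiliary reward of each state
    policy: array of shape (S, A) with policy[s, a] = pi(a | s)
    """
    P_pi = np.einsum("sa,sat->st", policy, P)   # state-to-state matrix under pi
    n = P_pi.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P_pi, r_aux)

def has_interference_incentive(P, r_aux, baseline, candidates, gamma, s0=0):
    """Definition 4, restricted to a finite set of candidate policies.
    Finding one candidate that beats the baseline at s0 suffices to show an
    interference incentive exists; a full check would optimize over all policies."""
    w_base = no_reward_value(P, r_aux, baseline, gamma)[s0]
    return any(no_reward_value(P, r_aux, pi, gamma)[s0] > w_base + 1e-9
               for pi in candidates)
```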
The future task auxiliary reward will introduce an interference incentive unless the baseline policy is optimal for this auxiliary reward.
###### Example 1.
Consider a deterministic MDP with two states $x_0$ and $x_1$ and two actions $a$ and $b$, where $x_0$ is the initial state. Suppose the baseline policy $\pi'$ always chooses action $a$.
Figure 2: MDP for Example [1](#Thmexample1).
We show that for most future task distributions, the future task auxiliary reward induces an interference incentive: staying in $x_0$ has a higher no-reward value than following the baseline policy (see Appendix [A](#A1)).
Figure 3: Future task approach with a baseline policy. The hypothetical agent runs on future task $i$ are shown in blue.
4 Future task approach with a baseline policy
----------------------------------------------
To avoid interference incentives, we modify the auxiliary reward given in Section [2](#S2) so that it is maximized by following the baseline policy $\pi'$: the agent receives the full auxiliary reward if it does at least as well as the baseline policy on the future task (see Algorithm [2](#alg2), with modifications relative to Algorithm 1 marked in red in the original listing).
Suppose the agent is in state $s_T$ after $T$ time steps on the current task. We run the baseline policy from $s_0$ for the same number of steps, reaching state $s'_T$. Then a future task $i$ is sampled from $F$, and we hypothetically run two agents following $\pi^*_i$ in parallel:
our agent starting at $s_T$ and a reference agent starting at $s'_T$, both seeking the goal state $g_i$. If one agent reaches $g_i$ first, it stays in the goal state and waits for the other agent to catch up. Denoting the agent state $s_t$ and the reference agent state $s'_t$, we define $r_i(s_t,s'_t)=1$ if $s_t=s'_t=g_i$, and $0$ otherwise, so $r_i$ becomes a function of $s'_t$ as well. Thus, our agent only receives the reward for task $i$ if it has reached $g_i$ and the reference agent has reached it as well. The future task terminates when both agents have reached $g_i$. We update the auxiliary reward from equation ([1](#S2.E1)) as follows:
$$r_{\text{aux}}(s_T,s'_T)=\beta D(s_T)\sum_i F(i)\,V^*_i(s_T,s'_T) \qquad (3)$$
Here, we replace $V^*_i(s_t)$ given in equation ([2](#S2.E2)) with a value function $V^*_i(s_t,s'_t)$ that depends on the reference state $s'_t$ and satisfies the following conditions.
If $s_t=s'_t=g_i$, it satisfies the goal condition $V^*_i(s_t,s'_t)=r_i(s_t,s'_t)=1$. Otherwise, it satisfies the following Bellman equation,
$$V^*_i(s_t,s'_t)=r_i(s_t,s'_t)+\gamma\max_{a_t\in\mathcal{A}}\sum_{s_{t+1}\in\mathcal{S}}p(s_{t+1}|s_t,a_t)\sum_{s'_{t+1}\in\mathcal{S}}p(s'_{t+1}|s'_t,a'_t)\,V^*_i(s_{t+1},s'_{t+1}) \qquad (4)$$

where $a'_t$ is the action taken by $\pi^*_i$ in state $s'_t$.
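Equation (4), together with the goal condition, can be solved by value iteration over state pairs $(s_t,s'_t)$. The sketch below does this for a deterministic three-state chain, with the reference agent following $\pi^*_i$ and either agent waiting at the goal once it arrives; the toy transition table and the hand-written optimal policy are illustrative assumptions, not the paper's code.

```python
GAMMA = 0.95

# Deterministic toy transitions on a chain A - B - C: NEXT[(s, a)] = s'.
STATES = ["A", "B", "C"]
ACTIONS = ["left", "right"]
NEXT = {("A", "left"): "A", ("A", "right"): "B",
        ("B", "left"): "A", ("B", "right"): "C",
        ("C", "left"): "B", ("C", "right"): "C"}

def shortest_action(s, goal):
    """One step of an optimal goal-reaching policy pi*_i (toy stand-in for this chain):
    move right if the goal lies ahead of s, otherwise move left."""
    return "right" if STATES.index(goal) > STATES.index(s) else "left"

def step(s, a, goal):
    """An agent that has reached the goal waits there for the other agent."""
    return s if s == goal else NEXT[(s, a)]

def paired_value(goal, iters=200):
    """Value iteration for equation (4): V[(s, s')], with reward 1 only when both
    the agent and the reference agent are at the goal (the goal condition)."""
    V = {(s, sp): 0.0 for s in STATES for sp in STATES}
    for _ in range(iters):
        new_V = {}
        for (s, sp) in V:
            if s == goal and sp == goal:
                new_V[(s, sp)] = 1.0          # goal condition V*(g, g) = 1
                continue
            sp_next = step(sp, shortest_action(sp, goal), goal)   # reference follows pi*_i
            new_V[(s, sp)] = GAMMA * max(
                V[(step(s, a, goal), sp_next)] for a in ACTIONS)  # agent maximizes
        V = new_V
    return V

V = paired_value("C")
# Agent 2 steps from the goal, reference 1 step: expect gamma ** max(2, 1) = gamma ** 2.
print(round(V[("A", "B")], 4))
```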
We now provide a closed form for the value function and show it converges to the right values (see proof in Appendix [B.1](#A2.SS1 "B.1 Proof of Proposition 1 ‣ Appendix B Proofs ‣ Avoiding Side Effects By Considering Future Tasks")):
###### Proposition 1 (Value function convergence).
The following formula for the optimal value function satisfies the above goal condition and Bellman equation ([4](#S4.E4)):

$$V^*_i(s_t,s'_t)=\mathbb{E}\left[\gamma^{\max(N_i(s_t),\,N_i(s'_t))}\right]=\sum_{n=0}^{\infty}\mathbb{P}(N_i(s_t)=n)\sum_{n'=0}^{\infty}\mathbb{P}(N_i(s'_t)=n')\,\gamma^{\max(n,n')}$$
In the deterministic case, $V^*_i(s_t,s'_t)=\gamma^{\max(n,n')}=\min(\gamma^n,\gamma^{n'})=\min(V^*_i(s_t),V^*_i(s'_t))$, where $n=N_i(s_t)$ and $n'=N_i(s'_t)$. In this case, the auxiliary reward produces the same incentives as the relative reachability penalty [Krakovna et al., [2019](#bib.bib10)], given by $\max(0,\gamma^{n'}-\gamma^n)=\gamma^{n'}-\min(\gamma^n,\gamma^{n'})$.
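The identity used above can be checked numerically, and it makes the incentive equivalence explicit: the reward $\min(\gamma^n,\gamma^{n'})$ and the penalty $\max(0,\gamma^{n'}-\gamma^n)$ always sum to $\gamma^{n'}$, which depends only on the reference state, so maximizing one is the same as minimizing the other. A quick sketch (constants are illustrative):

```python
GAMMA = 0.9

def future_task_reward(n, n_ref):
    """Deterministic paired value from Proposition 1: min(gamma^n, gamma^n')."""
    return min(GAMMA ** n, GAMMA ** n_ref)

def relative_reachability_penalty(n, n_ref):
    """Relative reachability penalty: max(0, gamma^n' - gamma^n)."""
    return max(0.0, GAMMA ** n_ref - GAMMA ** n)

# Reward + penalty = gamma^n' for every pair of goal distances, so the two
# quantities rank the agent's options identically.
for n in range(5):
    for n_ref in range(5):
        assert abs(future_task_reward(n, n_ref)
                   + relative_reachability_penalty(n, n_ref)
                   - GAMMA ** n_ref) < 1e-12
```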
We show that the future task approach with a baseline policy avoids interference incentives (see proof in Appendix [B.2](#A2.SS2) and a discussion of the stochastic case in Appendix [C](#A3)):
###### Proposition 2 (Avoiding interference in the deterministic case).
For any policy $\pi$ in a deterministic environment, the baseline policy $\pi'$ has the same or higher no-reward value: $W_\pi(s_0)\leq W_{\pi'}(s_0)$.
Algorithm 1 Basic future task approach

    function FTR(T, s_T):
        # Compute future task reward
        draw future task i ~ F
        for t = T to T + T_max:
            if s_t = g_i:
                return discounted reward γ^t
            else:
                a_t ~ π*_i(s_t),  s_{t+1} ~ p(s_t, a_t)
        # Goal state not reached
        return 0

    # Main loop
    for T = 0 to T_max:
        # Hypothetical interaction with future tasks to estimate the auxiliary reward
        R := 0
        for j = 0 to N_samples:
            R := R + FTR(T, s_T)
        D := 1 if s_T is terminal else 1 - γ
        r_aux(s_T) := β · D · R / N_samples
        # Interaction with the current task
        r_T := r(s_T) + r_aux(s_T)
        if s_T is terminal:
            break
        else:
            a_T ~ π(s_T),  s_{T+1} ~ p(s_T, a_T)
    return agent trajectory s_0, a_0, r_0, s_1, ...
Algorithm 2 Future task approach with a baseline

    function FTR(T, s_T, s'_T):
        draw future task i ~ F
        for t = T to T + T_max:
            if s_t = s'_t = g_i:
                return discounted reward γ^t
            if s_t ≠ g_i:
                a_t ~ π*_i(s_t),  s_{t+1} ~ p(s_t, a_t)
            if s'_t ≠ g_i:
                a'_t ~ π*_i(s'_t),  s'_{t+1} ~ p(s'_t, a'_t)
        # Goal state not reached
        return 0

    # Main loop
    s'_0 := s_0
    for T = 0 to T_max:
        # Hypothetical future task interaction
        R := 0
        for j = 0 to N_samples:
            R := R + FTR(T, s_T, s'_T)
        D := 1 if s_T is terminal else 1 - γ
        r_aux(s_T) := β · D · R / N_samples
        # Interaction with the current task
        r_T := r(s_T) + r_aux(s_T)
        if s_T is terminal:
            break
        else:
            a_T ~ π(s_T),  s_{T+1} ~ p(s_T, a_T)
            a'_T ~ π'(s'_T),  s'_{T+1} ~ p(s'_T, a'_T)
    return agent trajectory s_0, a_0, r_0, s_1, ...
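A direct Python transcription of the FTR routine and the auxiliary-reward estimate from Algorithm 2 might look as follows. The callables `sample_future_task`, `optimal_action`, and `sample_next_state` are placeholders for the future-task distribution $F$, the goal-reaching policy $\pi^*_i$, and the environment model, which the paper assumes are available; they are not part of its released code, and the discounting `gamma ** t` follows the pseudocode's $\gamma^t$.

```python
GAMMA = 0.99
T_MAX = 100

def future_task_reward(T, s, s_ref, sample_future_task, optimal_action,
                       sample_next_state):
    """One Monte Carlo sample of the future-task reward with a baseline (FTR in Algorithm 2).

    s     -- agent state s_T after T steps of the current task
    s_ref -- reference state s'_T reached by the baseline policy after T steps
    """
    goal = sample_future_task()              # draw i ~ F, represented by its goal state
    for t in range(T, T + T_MAX):
        if s == goal and s_ref == goal:      # both agents have reached the goal
            return GAMMA ** t                # discounted reward gamma^t, as in the pseudocode
        if s != goal:                        # our agent follows pi*_i until it reaches the goal
            s = sample_next_state(s, optimal_action(s, goal))
        if s_ref != goal:                    # the reference agent does the same from s'_T
            s_ref = sample_next_state(s_ref, optimal_action(s_ref, goal))
    return 0.0                               # goal not reached within the horizon

def auxiliary_reward(T, s, s_ref, terminal, beta, n_samples, *env_fns):
    """r_aux(s_T, s'_T) estimated by averaging n_samples FTR rollouts."""
    D = 1.0 if terminal else 1.0 - GAMMA
    R = sum(future_task_reward(T, s, s_ref, *env_fns) for _ in range(n_samples))
    return beta * D * R / n_samples
```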
Role of the baseline policy. The baseline policy is intended to represent what happens by default, rather than a safe course of action or an effective strategy for achieving a goal (so the baseline is task-independent). While a default course of action (such as doing nothing) can have bad outcomes, the agent does not cause these outcomes, so they don’t count as side effects of the agent’s actions.
The role of the baseline policy is to filter out these outcomes that are not caused by the agent, in order to avoid interference incentives.
In many settings it may not be obvious how to set the baseline policy. For example, Armstrong and Levinstein [[2017](#bib.bib3)] define doing nothing as equivalent to switching off the agent, which is not straightforward to represent as a policy in environments without a given noop action.
The question of choosing a baseline policy is outside the scope of this work, which assumes that this policy is given, but we look forward to future work addressing this point.
5 Key differences from related approaches
------------------------------------------
The future task approach is similar to relative reachability [Krakovna et al., [2019](#bib.bib10)] and attainable utility [Turner et al., [2020a](#bib.bib24)].
These approaches provided an intuitive definition of side effects in terms of the available options in the environment, an intuitive concept of interference, and somewhat ad-hoc auxiliary rewards that work well in practice on gridworld environments [Turner et al., [2020b](#bib.bib25)]. We follow a more principled approach to create some needed theoretical grounding for the side effects problem by deriving an optionality-based auxiliary reward from simple assumptions and a formal definition of interference.
The above approaches use a baseline policy in a *stepwise* manner, applying it to the previous state $s_{T-1}$ ($s'_T=s_T^{\text{step}}$), while the future task approach runs the baseline policy from the beginning of the episode ($s'_T=s_T^{\text{init}}$). We refer to these two options as *stepwise mode* and *initial mode*, shown in Figure [4](#S5.F4). We will show that the stepwise mode can result in a failure to avoid delayed side effects.
By default, an auxiliary reward using the stepwise mode does not penalize delayed side effects. For example, if the agent drops a vase from a high-rise building, then by the time the vase reaches the ground and breaks, the broken vase will be the default outcome. Thus, the stepwise mode is usually used in conjunction with *inaction rollouts* [Turner et al., [2020a](#bib.bib24)] in order to penalize delayed side effects. An inaction rollout uses an environment model to roll out the baseline policy into the future. Inaction rollouts from $s_T$ or $s'_T$ are compared to identify delayed effects of the agent's actions (see Appendix [D.1](#A4.SS1)).
While inaction rollouts are useful for penalizing delayed side effects, we will demonstrate that they miss some of these effects. In particular, if the task requires an action that has a delayed side effect, then the stepwise mode will give the agent no incentive to undo the delayed effect after the action is taken. We illustrate this with a toy example.
Figure 4: The initial mode (used in the future task approach) produces the baseline state $s_T^{\text{init}}$, while the stepwise mode produces the baseline state $s_T^{\text{step}}$.
Figure 5: MDP for Example [2](#Thmexample2) (actions: open door, close door, go to store, noop).
States: $x_H$ - agent at the house, $x_{OV}$ - agent outside the house with door open and vase intact, $x_{OB}$ - agent outside the house with door open and vase broken, $x_{SB}$ (terminal) - agent at the store with vase broken, $x_{CV}$ - agent outside the house with door closed and vase intact, $x_{SV}$ (terminal) - agent at the store with vase intact.
###### Example 2 (Door).
Consider the MDP shown in Figure [5](#S5.F5), where the baseline policy takes noop actions. The agent starts at the house ($x_H$) and its task is to go to the store. To leave the house, the agent needs to open the door ($x_{OV}$). The baseline policy in $x_{OV}$ leaves the door open, which leads to the wind knocking over a vase in the house ($x_{OB}$). To avoid this, the agent needs to deviate from the baseline policy by closing the door ($x_{CV}$).
The stepwise mode will incentivize the agent to leave the door open and go to $x_{SB}$. The inaction rollout at $x_H$ penalizes the agent for the predicted delayed effect of breaking the vase when it opens the door to go to $x_{OV}$. The agent receives this penalty whether or not it leaves the door open. Once the agent has reached $x_{OV}$, the broken vase becomes the default outcome in $x_{OB}$, so the agent is not penalized. Thus, the stepwise mode gives the agent no incentive to avoid leaving the door open, while the initial mode compares to $s'_T=x_H$ (where the vase is intact) and thus gives an incentive to close the door.
Thus, we recommend applying the baseline policy in the initial mode rather than the stepwise mode, in order to reliably avoid delayed side effects. We discuss further considerations on this choice in Appendix [D.2](#A4.SS2 "D.2 Offsetting incentives ‣ Appendix D Stepwise application of the baseline policy ‣ Avoiding Side Effects By Considering Future Tasks").
6 Experiments
--------------
###
6.1 Environments
We use gridworld environments shown in Figure [6](#S6.F6 "Figure 6 ‣ 6.1 Environments ‣ 6 Experiments ‣ Avoiding Side Effects By Considering Future Tasks") to test for interference (Sushi) and side effects (Vase, Box, and Soko-coin). These simple environments clearly illustrate the desirable and undesirable behaviors, which would be more difficult to isolate in more complex environments. In all environments, the agent can go in the 4 directions or take a noop (stay put), and receives a reward of 1 for reaching a goal state (e.g. collecting a coin).
Sushi (Figure [5(a)](#S6.F5.sf1 "5(a) ‣ Figure 6 ‣ 6.1 Environments ‣ 6 Experiments ‣ Avoiding Side Effects By Considering Future Tasks")). This environment [Krakovna et al., [2019](#bib.bib10)] is a conveyor belt sushi restaurant, with a conveyor belt that moves to the right by one square after every agent action. There is a sushi dish on the belt that is eaten by a human if it reaches the end of the belt. The interference behavior is to move the sushi off the belt. The desired behavior is to take the fastest path to the goal, which does not interfere with the sushi. The undesired behavior is to take a longer path to the goal that interferes with the sushi.
Vase (Figure [5(b)](#S6.F5.sf2 "5(b) ‣ Figure 6 ‣ 6.1 Environments ‣ 6 Experiments ‣ Avoiding Side Effects By Considering Future Tasks")). In this environment [Leech et al., [2018](#bib.bib12)], there is a vase on the shortest path to the goal, so the fastest path involves a side effect (colliding with the vase). The desired behavior is to take a longer path to the goal around the vase.
Box (Figure [5(c)](#S6.F5.sf3 "5(c) ‣ Figure 6 ‣ 6.1 Environments ‣ 6 Experiments ‣ Avoiding Side Effects By Considering Future Tasks")). This environment [Leike et al., [2017](#bib.bib13)] contains a box that needs to be pushed out of the way for the agent to reach the goal. The undesired behavior is taking the shortest path to the goal, which involves pushing the box down into a corner (an irrecoverable position). The desired behavior is to take a slightly longer path in order to push the box to the right. Both of these paths require irreversible actions: if the box is moved to the right, the agent can move it back, but then the agent ends up on the other side of the box, so the starting state is unreachable.
Soko-coin (Figures [5(d)](#S6.F5.sf4), [5(e)](#S6.F5.sf5)). We modified the classic Sokoban environment to include possible side effects. These Sokoban variants contain $N$ boxes and $N$ coins for $N=2,3$, and the agent's goal is to collect all the coins (with a reward of 1 each). The boxes are obstacles on the way to the coins, and the agent can put them in irreversible positions in order to get to the coins faster, which is a side effect. The coins disappear when they are collected, so irreversible actions are required to complete the task.

(a) Sushi

(b) Vase

(c) Box

(d) Soko-coin-2

(e) Soko-coin-3

Figure 6: Gridworld environments.
###
6.2 Setup
We compare the following approaches: no auxiliary reward, reversibility reward, and future task reward with and without a baseline policy. For each approach, we run a Q-learning agent that learns the auxiliary reward as it explores the environment.
We approximate the future task auxiliary reward using a sample of 10 possible future tasks.
We approximate the baseline policy by sampling from the agent’s experience of the outcome of the noop action, and assuming the state stays the same in states where the agent has not taken a noop yet. This gives similar results on our environments as using an exact baseline policy computed using a model of the noop action.
We compare an exact implementation of the future task auxiliary reward with a scalable UVFA approximation [Schaul et al., [2015](#bib.bib20)].
The UVFA network computes the value function given a goal state (corresponding to a future task). It consists of two sub-networks, an origin network and a goal network, taking as input the starting and goal states respectively, with each subnetwork computing a representation of its input state. The overall value is then computed by taking a dot product of the representations and applying a sigmoid function.
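A minimal NumPy sketch of the UVFA architecture just described: an origin network and a goal network each map a state (here a one-hot vector) through one hidden layer to a small representation, and the value estimate is the sigmoid of their dot product. The layer sizes match the configuration reported below; the number of states, the one-hot encoding, the ReLU activation, and the random initialization are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, HIDDEN, REPR = 25, 30, 5   # 25 states is illustrative; hidden 30, representation 5

def init_subnet():
    """One sub-network: one-hot state -> hidden layer -> representation."""
    return {"W1": rng.normal(0.0, 0.1, (N_STATES, HIDDEN)),
            "b1": np.zeros(HIDDEN),
            "W2": rng.normal(0.0, 0.1, (HIDDEN, REPR)),
            "b2": np.zeros(REPR)}

origin_net, goal_net = init_subnet(), init_subnet()

def subnet_forward(params, one_hot_state):
    h = np.maximum(0.0, one_hot_state @ params["W1"] + params["b1"])   # ReLU (assumed)
    return h @ params["W2"] + params["b2"]

def uvfa_value(state_idx, goal_idx):
    """Value of `state_idx` for the future task whose goal is `goal_idx`:
    sigmoid of the dot product of the origin and goal representations."""
    s = np.eye(N_STATES)[state_idx]
    g = np.eye(N_STATES)[goal_idx]
    phi = subnet_forward(origin_net, s)        # origin representation
    psi = subnet_forward(goal_net, g)          # goal representation
    return 1.0 / (1.0 + np.exp(-phi @ psi))    # sigmoid

print(uvfa_value(3, 17))
```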
The two networks have one hidden layer of size 30 and an output layer of size 5. This configuration was chosen using a hyperparameter search over the number of layers (1, 2, 3), hidden layer size (10, 30, 50, 100, 200), and representation size (5, 10, 50).
For each transition $(x,a,y)$ from state $x$ to state $y$, we perform a Bellman update of the value function estimate $V_s$ for a random sample of 10 goal states $s$, with the following loss function: $\sum_s\left[\gamma\max_{b\in\mathcal{A}}V_s(y,b)-V_s(x,a)\right]$.
We sample the 10 goal states from a stored set of 100 states encountered by the agent. Whenever a state is encountered that is not in the stored set, it randomly replaces a stored state with probability 0.01.
Exact and UVFA agents have discount rates of 0.99 and 0.95 respectively. For each agent, we do a grid search over the scaling parameter $\beta$ (0.3, 1, 3, 10, 30, 100, 300, 1000), choosing the highest value of $\beta$ that allows the agent to receive the full reward on the current task for exact agents (and at least 90% of the full reward for UVFA agents). We anneal the exploration rate linearly from 1 to 0, and keep it at 0 for the last 1000 episodes. We run the agents for 50K episodes in all environments except Soko-coin, where we run exact agents for 1M episodes and UVFA agents for 100K episodes. The runtime of the UVFA agent was 0.2 seconds per episode on the Soko-coin environments. Only the UVFA approximation was feasible on Soko-coin-3, since the exact method runs out of memory.
### 6.3 Results
Table 1: Results on the gridworld environments for Q-learning with no auxiliary reward (None), reversibility reward, and future task (FT) reward (exact or UVFA, with or without a baseline policy). The average number of interference behaviors per episode is shown for the Sushi environment, and the average number of side effects per episode is shown for the other environments (with high levels in red and low levels in blue). The results are averaged over the last 1000 episodes, over 10 random seeds for exact agents and 50 random seeds for UVFA agents.
| Auxiliary reward | Sushi | Vase | Box | Soko-coin-2 | Soko-coin-3 |
| --- | --- | --- | --- | --- | --- |
| None | 0 | 1 | 1 | 2 | 3 |
| Reversibility | 0 | 0 | 1 | 2 | 3 |
| FT (no baseline, exact) | 1 | 0 | 0 | 0 | N/A |
| FT (baseline, exact) | 0 | 0 | 0 | 0 | N/A |
| FT (no baseline, UVFA) | 0.64 ± 0.07 | 0.12 ± 0.05 | 0.22 ± 0.06 | 0.53 ± 0.09 | 1.04 ± 0.12 |
| FT (baseline, UVFA) | 0.05 ± 0.01 | 0.12 ± 0.05 | 0.22 ± 0.06 | 0.53 ± 0.09 | 1.04 ± 0.12 |
Sushi.
The agent with no auxiliary reward has no incentive to interfere with the sushi and goes directly to the goal. Since the starting state is unreachable no matter what the agent does, the reversibility reward is always 0, so it does not produce interference behavior. The future task agent with no baseline interferes with the sushi, while the agent with a baseline goes directly to the goal.
Vase.
Since the agent can get to the goal without irreversible actions, both the reversibility and future task methods avoid the side effect on this environment, while the regular agent breaks the vase.
Box.
The reversibility agent loses its auxiliary reward no matter how it moves the box, so it takes the fastest path to the goal that pushes the box in the corner (similarly to the regular agent). However, the future task agent pushes the box to the right, since some future tasks involve moving the box.
Soko-coin.
Since the reversibility agent loses its auxiliary reward by collecting coins, it pushes boxes next to walls to get to the coins faster (similarly to the regular agent). However, the future task agent goes around the boxes to preserve future tasks that involve moving boxes.
The results are shown in Table [1](#S6.T1 "Table 1 ‣ 6.3 Results ‣ 6 Experiments ‣ Avoiding Side Effects By Considering Future Tasks"). Each exact Q-learning agent converged to the optimal policy given by value iteration for the corresponding auxiliary reward. Only the future task approach with the baseline policy does well on all environments, avoiding side effects as well as interference. While the UVFA approximation of the future task auxiliary reward avoids side effects less reliably than the exact version, it shows some promise for scaling up the future task approach.
7 Other related work
---------------------
Side effects criteria using state features. Minimax-regret querying [Zhang et al., [2018](#bib.bib26)] assumes a factored MDP in which the agent is allowed to change some of the features, and proposes a criterion for querying the supervisor about changing other features in order to allow for intended effects. RLSP [Shah et al., [2019](#bib.bib21)] defines an auxiliary reward for avoiding side effects in terms of state features by assuming that the starting state of the environment is already organized according to human preferences. While these approaches are promising, they require a set of state features in order to compute the auxiliary reward, which increases the burden on the reward designer.
Empowerment.
The future task approach is related to *empowerment* [Klyubin et al., [2005](#bib.bib9), Salge et al., [2014](#bib.bib18)], a measure of the agent's control over its environment. Empowerment is defined as the maximal mutual information between the agent's actions and the future state, and thus measures the agent's ability to reliably reach many states. Maximizing empowerment would encourage the agent to avoid irreversible side effects, but would also incentivize interference, and it is unclear to us how to define an empowerment-based measure that would avoid this. One possibility is to penalize the reduction in empowerment between the current state $s_T$ and the baseline $s'_T$. However, empowerment is indifferent between these two cases: A) the same states are reachable from $s_T$ and $s'_T$, and B) a state $x$ is reachable from $s'_T$ but not from $s_T$, while another state $y$ is reachable from $s_T$ but not from $s'_T$. Thus, penalizing reduction in empowerment would miss some side effects: e.g. if the agent replaced the sushi on the conveyor belt with a vase, empowerment could remain the same, so the agent is not penalized for breaking the vase.
Safe exploration. While the safe exploration problem may seem similar to the side effects problem, safe exploration is about avoiding harmful actions during the training process (a learning problem), while the side effects problem is about removing the incentive to take harmful actions (a reward design problem). Many safe exploration methods work by changing the agent’s incentives, and thus can potentially address the side effects problem. This includes reversibility methods [Eysenbach et al., [2017](#bib.bib6)], which avoid side effects in tasks that don’t require irreversible actions. Safe exploration methods that penalize risk [Chow et al., [2015](#bib.bib4)] or use intrinsic motivation [Lipton et al., [2016](#bib.bib14)] help the agent avoid side effects that result in lower reward (such as getting trapped or damaged), but do not discourage the agent from damaging the environment in ways that are not penalized by the reward function (e.g. breaking vases). Thus, safe exploration methods offer incomplete solutions to side effects, just as side effects methods provide incomplete solutions to safe exploration. These methods can be combined if desired to address both problems.
Uncertainty about the objective. Inverse Reward Design [Hadfield-Menell et al., [2017](#bib.bib8)] incorporates uncertainty about the objective by considering alternative reward functions that are consistent with the given reward function in the training environment, and following a risk-averse policy.
This helps avoid side effects that stem from distributional shift, where the agent encounters a new state that was not seen during training. However, along with avoiding harmful new states, the agent also avoids beneficial new states.
Another uncertainty method is quantilization [Taylor, [2016](#bib.bib23)], which incorporates uncertainty by sampling from the top quantile of actions rather than taking the optimal action. This approach does not consistently remove the incentive for side effects, since harmful actions will still be sampled some of the time.
Human oversight. An alternative to specifying an auxiliary reward is to teach the agent to avoid side effects through human oversight, such as inverse reinforcement learning [Ng and Russell, [2000](#bib.bib17), Hadfield-Menell et al., [2016](#bib.bib7)], demonstrations [Abbeel and Ng, [2004](#bib.bib1)], or human feedback [Christiano et al., [2017](#bib.bib5), Saunders et al., [2017](#bib.bib19)].
It is unclear how well an agent can learn a reward for avoiding side effects from human oversight. We expect this to depend on the diversity of settings in which it receives oversight and its ability to generalize from those settings, while an intrinsic reward for avoiding side effects would be more robust and reliable. Such an auxiliary reward could also be combined with human oversight to decrease the amount of human input required for an agent to learn human preferences, e.g. if used as a prior for the learned reward function.
8 Conclusions
--------------
To address the challenge of defining what side effects are, we have proposed an approach in which a definition of side effects is automatically implied by the simpler definition of future goals. This lays a theoretical foundation for formalizing the side effects problem.
This approach provides an auxiliary reward for preserving the ability to perform future tasks that incentivizes the agent to avoid side effects, whether or not the current task requires irreversible actions, and does not introduce interference incentives for the agent.
There are many possible directions for follow-up work, which include improving the UVFA approximation of the future task reward to more reliably avoid side effects, applying the method to more complex agents and environments, generalizing interference avoidance to the stochastic case, investigating the choice of future task distribution $F$
(e.g. incorporating human preferences by learning the task distribution through human feedback methods [Christiano et al., [2017](#bib.bib5)]), and investigating other possible undesirable incentives that could be introduced besides interference incentives.
Broader impact
--------------
In present-day reinforcement learning, what the agent should not do is usually specified manually, e.g. through constraints or negative rewards. This ad-hoc approach is unlikely to scale to more advanced AI systems in more complex environments. Ad-hoc specifications are usually incomplete, and more capable AI systems will be better at finding and exploiting gaps and loopholes in the specification.
We already see many examples of specification gaming with present-day AI systems, and this problem is likely to get worse for more capable AI systems [Krakovna et al., [2020](#bib.bib11)].
We think that building and deploying more advanced AI systems calls for general approaches and design principles for specifying agent objectives. Our paper makes progress on developing such a general principle, which aims to capture the heuristic of “do no harm” in terms of the available options in the environment, and gives the agent an incentive to consider the future consequences of its actions beyond the current task.
Without a reliable and principled way to avoid unnecessary changes to the world, the deployment of AI systems will be limited to narrow domains where the designer can enumerate everything the agent should not do. Thus, general approaches to objective specification would enable society to reap the benefits of applying capable AI systems to more difficult problems, which has potential for high long-term impact.
In terms of negative impacts, adding an auxiliary reward for future tasks increases the computational requirements and thus the energy cost of training reinforcement learning algorithms, compared to hand-designed rewards and constraints for avoiding side effects. The remaining gaps in the theoretical foundations of our method could lead to unexpected issues if they are not researched properly and instead left to empirical evaluation.
Acknowledgements
----------------
We thank Ramana Kumar for detailed and constructive feedback on paper drafts and code. We also thank Jonathan Uesato, Matthew Rahtz, Alexander Turner, Carroll Wainwright, Stuart Armstrong, and Rohin Shah for helpful feedback on drafts.
Clustering and the efficient use of cognitive resources

Theoretical Note

Ishita Dasgupta (a, *) and Thomas L. Griffiths (b)

a Department of Computer Science, Princeton University, United States of America
b Departments of Psychology and Computer Science, Princeton University, United States of America

Journal of Mathematical Psychology 109 (2022) 102675
https://doi.org/10.1016/j.jmp.2022.102675
Received 17 November 2021; received in revised form 29 April 2022; accepted 10 May 2022

Keywords: Bayesian inference; Resource rationality; Information theory; Probabilistic numerics

Abstract: A central component of human intelligence is the ability to make abstractions, to gloss over some details in favor of drawing out higher-order structure. Clustering stimuli together is a classic example of this. However, the crucial question remains of how one should make these abstractions - what details to retain and what to throw away? How many clusters to form? We provide an analysis of how a rational agent with limited cognitive resources should approach this problem, considering not only how well a clustering fits the data but also how 'complex' it is, i.e. how cognitively expensive it is to represent. We show that the solution to this problem provides a way to reinterpret a wide range of psychological models that are based on principles from non-parametric Bayesian statistics. In particular, we show that the Chinese Restaurant Process prior, ubiquitous in rational models of human and animal clustering behavior, can be interpreted as minimizing an intuitive formulation of representational complexity.
1. Introduction
Theonlyenduringaspectofourenvironmentisthatnothing
staysthesame.Weneverhave exactlythesameexperiencetwice.
As a consequence, the human mind has to form abstractions,
clustering these experiences together in a way that supports
generalization. Psychological models have applied this lens to
phenomenaasdiverseascategorization(Anderson,1991;San-
bornetal.,2010),featurelearning(Griffiths&Austerweil,2008),
theoryformation(Kempetal.,2010),classicalconditioning(Ger-
shmanetal.,2010),andwordsegmentation(Goldwateretal.,
2009).Akeyproblemthatarisesineachofthesemodelsisknow-
ingwhentogenerateanewcluster—whenanobject,stimulus,or
wordisgenuinelyofakindthathasneverbeenseenbefore.
Decidingwhentoformanewclusterinvolvesmakingatrade-
offbetweenthecomplexityoftheunderlyingrepresentationand
howwellitdescribestheenvironment.Groupingallexperiences
intoasingleclusterwheretheyarerepresentedbysomeabstract
summarystatisticsismaximallysimple,butatthecostoflosing
asignificantamountofdetail.Havingaseparateclusterforeach
experienceaccuratelycapturesthenuancesofthoseexperiences,
butismaximallycomplex.Sohow shouldweformclusters?
Inthispaper,weaddressthisquestioninthespiritofrational
analysis(Anderson,1991),askinghowitmightbesolvedbyan
idealagent.Moreprecisely,weengageinresourcerationalanaly-
sis(Gershmanetal.,2015;Griffithsetal.,2015;Lieder&Griffiths,
∗Correspondenceto:DeepMindNewYorkCity,USA.
E-mailaddress: [email protected](I.Dasgupta).2019),sinceouranalysisfocusesonthequestionofhowthat
agentmightmakethebestuseoflimitedcognitiveresources.For-
malizingthecomplexityofaclusteringininformation-theoretic
terms,wederiveanoptimaldistributionoverclusterings.
Thisanalysisyieldsasurprisingresult:ouroptimalsolution
hasthesamepropertiesasthedistributionoverclusteringsas-
sumedinallofthepsychologicalmodelsmentionedabove.These
modelsuseadistributionoverclusteringsoriginallyintroducedin
psychologybyAnderson(1991)inhisrationalmodelofcatego-
rization.Thisdistributionwasindependentlydiscoveredinnon-
parametricBayesianstatistics(Aldous,1985;Hjortetal.,2010),
whereitisknownastheChineserestaurantprocess(CRP).1
TheCRPhasanumberofattractivemathematicalproperties
thatcanbeusedtojustifyitsuseinpsychologicalmodels,related
toitsconvenienceortheassumptionsitmakesabouttheenvi-
ronment.OuranalysisprovidesanewreasonwhytheCRPmight
makesenseasacomponentofpsychologicalmodels:weshow
thatCRP-likedistributionscanarisefromanefforttominimize
representationalcosts,i.e.thatthisdistributionisnormativeun-
deranassumptionofresource-rationality.Inparticular,weshow
thatbestusingafixednumberofbitstostoreanobject-category
mapping(viz.controllingthecomplexityofthatmapping)can
1The name of this process comes from its creators imagining a large
restaurantthatseatsmultiplepartiesatcommunaltables,withpeoplejoining
tablesbasedontheircurrentpopularity–aphenomenonthatcouldapparently
beobservedinSanFrancisco’sChinatown.Thetablesareclustersandthepeople
theexperiencesbeingclustered.
Fig. 1. Schematic illustration of the categorization task simulated in Anderson (1991). In this figure, we represent each object with its binary features and each cluster as a bucket. Each time an object is added, we can either assign it to an existing cluster, or create a new one. Solid arrows indicate the higher probability assignment; Anderson (1991) assumes that this is the assignment chosen at each step. Iterating this over the 6 stimuli in their (fixed) order of presentation gives the final clustering. See main text for further details.
giveCRP-likebehavior.Theseresultsprovidethefirstprocess-
levelexplanationofwhythiskindofclusteringbehaviormight
beareasonableassumptioninpsychologicalmodels.
Theplanofthepaperisasfollows.Inthenextsectionwe
provideamoredetailedintroductiontotheChineserestaurant
process.Wethenturntoouranalysisofoptimalclusteringunder
resourceconstraints.Wederiveourkeyresultsmathematically
andpresentsimulationsthatverifyouranalysis.Weconclude
withadiscussionoftheimplicationsoftheseresultsfordevel-
opingmodelsofhumancognition.
2. Background: The Chinese restaurant process

As mentioned above, one of the challenges involved in clustering a set of experiences is deciding how many clusters there should be. Researchers in nonparametric Bayesian statistics developed an innovative strategy for solving this problem: rather than specifying a particular number of clusters, we instead assume that there could exist an infinite number of clusters of which only a finite number have been observed so far. The problem of determining the number of clusters then becomes a matter of inferring how many clusters may have been observed, which can be solved by applying Bayes' rule.

Pursuing this approach requires identifying a prior distribution over clusterings that remains well-defined regardless of how many experiences need to be clustered. A common way to achieve this goal is to assume that the prior probability an item belongs to a cluster follows a distribution known as the Chinese Restaurant process (CRP; Aldous, 1985). Under this distribution, the probability of belonging to an existing cluster is proportional to the number of objects already in that cluster, while the probability of a new cluster is proportional to the value of a parameter α.
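As a concrete illustration (ours, not from the paper), the following Python sketch samples a clustering sequentially under exactly this rule; the value α = 1 used in the example is arbitrary.

```python
import numpy as np

def sample_crp(n_objects, alpha=1.0, rng=None):
    """Sample a clustering of n_objects from the Chinese restaurant process."""
    rng = np.random.default_rng() if rng is None else rng
    assignments = []   # cluster index of each object
    counts = []        # number of objects in each existing cluster
    for _ in range(n_objects):
        # Existing cluster k is chosen with probability n_k / (n + alpha),
        # a new cluster with probability alpha / (n + alpha).
        weights = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(counts):
            counts.append(1)   # open a new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

# Example: clustering 1000 objects typically yields a few large clusters
# and a long tail of small ones.
assignments, counts = sample_crp(1000, alpha=1.0, rng=np.random.default_rng(0))
print(len(counts), sorted(counts, reverse=True)[:5])
```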
To make this kind of Bayesian clustering more concrete, we consider how it might apply to an empirical context: the first experiment from Medin and Schaffer (1978). Here, participants saw and were asked to categorize stimuli that varied along four binary dimensions (color, form, shape and position) and given a binary category label. These data were used to demonstrate the rational model of categorization presented in Anderson (1991) (illustrated in Fig. 1), which assumes a prior on clusters that is equivalent to the CRP. Anderson assumed that the binary category label is treated as an additional feature, so there are five binary features. The stimuli are then assumed to be presented in the order {11111, 10101, 01011, 00000, 01000, 10110}. Anderson's model assigns the first stimulus to its own cluster. On seeing the second stimulus, it decides whether to assign it a new cluster or to the same cluster as the first one. It makes this decision on the basis of two factors: the new stimulus' overlap with the features of the stimuli already in a cluster (the likelihood function), and a parameter that determines how likely in general it is that two stimuli belong to the same cluster vs. different ones (the prior). The assumption that Anderson (1991) made about the latter parameter defines a prior that is equivalent to the CRP (Neal, 1998).²

Iterating this process for each stimulus, Anderson's model predicted that the most likely clustering of these stimuli is (11111, 10101, 10110), (00000, 01000), (01011). This clustering does not split the stimuli by the category label (here, the last binary feature). However, the model can generate predictions for the category membership of novel stimuli based on this clustering by calculating the probability they get assigned to a cluster and the probability of the category label under that cluster. Anderson showed that these predictions are in close accordance with the judgments of participants in Medin and Schaffer's (1978) experiment, with a rank-order correlation of .87.

Anderson's model used a particularly simple inference algorithm for the CRP, allocating each stimulus to the cluster with highest posterior probability based on previous allocations (as also depicted in Fig. 1). Subsequent work has extended this model to accommodate different inference algorithms (Sanborn et al., 2010) and sharing of clusters across categories (Griffiths et al., 2007), applying the resulting models to a variety of results in human category learning (for a review, see Griffiths et al., 2008).

The CRP has various desirable properties that make it a sensible choice as a prior over clusters. In the infinite space of possible clusterings, it favors assigning data to a small number of clusters. The expected number of clusters grows slowly as the number of experiences being clustered increases. In particular, the CRP displays "preferential attachment" or a "rich-get-richer" dynamic, where a cluster with a large number of members is more likely to grow further. The resulting distributions of cluster sizes ('scale-free' distributions, where the size of clusters decays as a power law) have been shown to be prevalent across several other domains (Adamic & Huberman, 2002; Barabási & Albert, 1999; Mandelbrot, 1960; Rosen & Resnick, 1980).

Another practical reason for the success of the CRP is that it is agnostic to the order of data presentation (i.e. exchangeable; Aldous, 1985): changing the order of presentation of experiences does not change the probability of their cluster memberships. This makes Bayesian inference more tractable, as it is easy to compute conditional distributions that are required for standard inference algorithms (see, e.g., Gershman & Blei, 2012).

² Intuitively, with a fixed probability that two objects belong to the same cluster (c, the coupling constant in Anderson (1991)), it is more likely that the new object should be in the same cluster as an object that already has several objects in it. This results in the CRP assignment rules outlined above, with the coupling constant scaling inversely with α: α = (1 − c)/c.
Fig. 2. Schematic illustration of the clustering problem. In this schematic, we represent each object as a blue ball and each cluster as a bucket. Each object has no features except its unique index. (A) We want to cluster object 4, conditioned on how objects 1, 2, and 3 were clustered. There are three possibilities (indicated by the orange arrows): assign to bucket 1 which contains two objects already, assign to bucket 2 with one object, or start a new cluster by assigning to one of the empty buckets. In this work, the prior over clusterings specifies these conditional probabilities. (B) By iterating these conditional probabilities, we can derive a probability distribution over the range of possible final clusterings of all objects (in this case 5 objects in total). These final clusterings are represented here, as well as how they differ in the entropy of the marginal distribution over clusters. A prior over clusterings specifies a distribution over these different solutions.
In addition to these practical properties of the CRP, there are other reasons why human minds might use this particular prior distribution for clustering. In its first use in psychology, Anderson (1991) derived the CRP from the assumption that any two objects must have the same fixed prior probability of being in the same cluster. This is related to exchangeability, and might be an assumption justified by the environment in which humans operate. In the remainder of the paper we pursue another hypothesis about the appropriateness of the CRP for cognitive modeling: that CRP-like clustering (as well as its various properties, like exchangeability) might be an emergent property of a resource-rational tendency to best utilize a limited representational budget.
3. Resource-rational clustering

As noted above, the CRP has been used to model clustering problems that arise in a variety of domains. Following Anderson's (1991) original application we will focus on the case where the agent seeks to organize a set of objects into clusters to support their categorization (see Fig. 2). We formalize this problem as follows. We have a total of N objects. Since we are concerned with examining the prior over clusterings (i.e., how each object should be assigned to a cluster in the absence of any specific features), we assume that these objects do not have any distinguishing features except for their index i ∈ [0, N]. The goal is to organize these objects into clusters. We do not know a priori how many clusters the objects will be sorted into, but there will certainly be no more than the number of objects N. We therefore need to learn a mapping π from object $o_j$ for j ∈ [0, N) to cluster $c_i$ for i ∈ [0, N).

For an agent with finite cognitive resources, it will be important to represent these objects in as simple a way as possible while allowing for the potential differences between them. We will derive a prior based on this idea of learning 'simpler' mappings π, and show that this simplicity prior corresponds closely to the CRP. But first, how do we measure simplicity?
3.1. Measuring simplicity or complexity

A human preference toward simplicity, or Occam's razor, has been used to explain several cognitive phenomena in perception, learning and high-level cognition (Chater & Vitányi, 2003), with the use of information theory to define this 'simplicity' (e.g., Bhui & Gershman, 2018; Gottwald & Braun, 2019; Olshausen & Field, 1996; Ortega et al., 2015; Todorov, 2009; Zenon et al., 2019). We follow in this tradition and use the length of the code required to represent a mapping π as a measure of its complexity (as also seen in Chater, 1996; Feldman, 2016). A longer (i.e. more complex) code has higher representational cost. We make this more precise below.

We first compute an intermediate quantity, the marginal distribution over categories given a mapping π:

$P_\pi(c_i) = \frac{\sum_{j}^{N} \mathbb{I}[\pi(o_j) = c_i]}{N}$    (1)

Each mapping π gives a probability distribution over categories. We then define simplicity or complexity of this probability distribution. What makes one distribution over clusters more or less complex than another?

The entropy of the distribution can act as a measure of its representational cost and thereby of its complexity. It is given by:

$H(\pi) = -\sum_{i}^{N} P_\pi(c_i) \log P_\pi(c_i)$    (2)

The information-theoretic interpretation of the entropy of a distribution is that it measures the average number of bits (binary coin flips) required to convey an object sampled from that distribution, under the most efficient code possible. The number of bits required for $c_i$, or the length of its 'codeword', is $-\log P_\pi(c_i)$ (under the most efficient code; Shannon, 1948). Weighting this codeword length by the probability of each token gives the entropy of the distribution. Intuitively, this measures how difficult it is (i.e. how many bits of information are required) to convey which object is sampled, when randomly sampling objects from the given distribution, to an observer that knows the distribution but does not know which specific object was sampled. A representationally 'expensive' or 'complex' distribution is one that requires more such bits.

In using entropy as a measure of representational complexity, we are following previous work in both psychology and neuroscience. Work on planning and sequential decision-making has used entropy as a measure of representational cost (Todorov, 2009), and other work has suggested that a tendency to minimize this information-theoretic cost is what characterizes bounded-rational behavior in agents with limited resources (Olshausen & Field, 1996; Ortega et al., 2015). This tendency has been empirically validated, and used to model neural representations (Laughlin, 1981) as well as human behavior in high-level cognitive tasks (Bhui & Gershman, 2018).

How does this measure map onto our intuitions in this domain? The lowest entropy distribution is the distribution that allocates all of its probability to a single outcome. Here, the entropy is zero, since samples from the distribution are always the same: there is no information to be transmitted about a specific sample. On the flip side, the highest entropy distribution is one that is uniform over all outcomes. Here, since all outcomes are equally likely, even the most efficient code has to convey which of several possibilities was actually chosen at a given sample. Other distributions fall in between, as measured by Eq. (2). In the context of our clustering problem, this maps onto the intuition that it is easy to remember to always put every object in the same cluster (a low entropy distribution, lower representational cost), but harder to remember different clusters for each object, with the extreme being to have a separate cluster for each object (a high entropy distribution, higher representational cost).
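As a quick concrete check of these two extremes, here is a small Python snippet (ours, not the paper's) that computes the marginal over clusters (Eq. (1)) and its entropy (Eq. (2)) for a few mappings.

```python
import numpy as np

def clustering_entropy(assignments):
    """Entropy H(pi) of the marginal over clusters implied by a mapping:
    P_pi(c_i) = n_i / N (Eq. (1)), H = -sum_i P_pi(c_i) log P_pi(c_i) (Eq. (2))."""
    assignments = np.asarray(list(assignments))
    _, counts = np.unique(assignments, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

N = 8
print(clustering_entropy([0] * N))      # one cluster for everything: H = 0
print(clustering_entropy(range(N)))     # one cluster per object: H = log N, about 2.08
print(clustering_entropy([0, 0, 0, 0, 1, 1, 2, 3]))  # something in between
```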
We want to use this measure of complexity to inform a probability distribution P(π) over mappings π. We do have some representational capacity and would like to best utilize it: we do not want to simply minimize the entropy and default to the zero entropy π. Instead we assume some fixed average representational capacity; this means that the mean entropy averaged over all clusterings (weighted by P(π)) is fixed. Since the number of possible clusters is infinite, the entropy can grow arbitrarily large. The probability of these high entropy distributions must be correspondingly low to accommodate finite representational capacity in expectation. We therefore will prefer low complexity (low entropy) mappings over higher complexity ones. Exactly how should this preference decay as a function of complexity/entropy? The negative exponential is the maximum entropy distribution for a fixed mean (note that 'entropy' in 'maximum entropy' here refers to the entropy of the prior probability distribution P(π), not H(π), which is the entropy of the mapping π). This gives:

$P(\pi) = \frac{\exp(-kH(\pi))}{\sum_{\pi'} \exp(-kH(\pi'))} \propto \exp(-kH(\pi))$    (3)

where k is a positive constant and the normalizing factor sums over all possible mappings π'. It is (by the principle of maximum entropy; Shore & Johnson, 1980) the most general, least informative distribution given a fixed mean, i.e. a fixed average representational capacity. This is the same logic used to derive other assumption-free distributions with fixed mean, e.g. the negative exponential free energy functional (Ortega et al., 2015).

We have therefore specified a prior probability distribution P(π) over different clusterings π for when we have a fixed mean representational capacity, where the representational cost of a mapping is given by H(π). In the following sections, we show that the CRP corresponds to exactly such a probability distribution.
3.2. The relationship between the CRP and entropy

We first recall the key property of the CRP: the way a new object is added to an existing clustering:

1. Assign it to an existing cluster with probability proportional to the number of items already in the cluster.
2. Assign it to a new cluster with fixed probability α.

This "rich-get-richer" or "preferential attachment" dynamic also arises when trying to reduce entropy. Adding an object to a cluster that already has high probability reduces the entropy of the distribution by making it peakier. On the other hand, adding it to a less populated one moves the distribution closer to uniform, increasing its entropy. Therefore, adding a new object to a cluster that already has many objects in it results in less cost for representing that distribution than adding it to one that has fewer objects.

We can formalize this intuition. By Eq. (3), the entropy of a mapping specifies its prior probability. We can compute the entropies of all the mappings that arise from different possible assignments of a new object to an existing mapping. Inserting these entropies into Eq. (3) specifies a probability distribution over the possible assignments of the new object. This way of assigning new objects to clusters is not arbitrary. Rather, it is normative, under the resource-rational assumption that we want to best utilize limited representational resources. We thereby prefer mappings with low complexity (and hence low representational cost); see Section 3.1 for details.

In the following section, we provide the mathematical details of the above procedure.
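Before the formal derivation, here is a small Python sketch (ours) of the procedure just described: each candidate assignment of a new object is scored by the entropy of the resulting mapping, and the assignment is sampled with probability proportional to exp(−k ΔH). Fixing k = N anticipates the choice made in Section 4.3; it is an assumption of this sketch, not something required by the procedure itself.

```python
import numpy as np

def entropy_from_counts(counts):
    n = np.asarray(counts, dtype=float)
    p = n / n.sum()
    return float(-(p * np.log(p)).sum())

def entropy_based_assignment(counts, k, rng):
    """Assign one new object given current cluster sizes `counts`, with
    probability proportional to exp(-k * (H_candidate - H_current))."""
    counts = list(counts)
    candidate_entropies = []
    for j in range(len(counts) + 1):          # existing clusters, plus one new cluster
        new_counts = counts.copy()
        if j < len(counts):
            new_counts[j] += 1
        else:
            new_counts.append(1)
        candidate_entropies.append(entropy_from_counts(new_counts))
    H = entropy_from_counts(counts)
    logits = -k * (np.array(candidate_entropies) - H)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Grow a clustering object by object, setting k = N at every step.
rng = np.random.default_rng(0)
counts = [1]                                  # the first object opens a cluster
for n in range(1, 2000):
    j = entropy_based_assignment(counts, k=n, rng=rng)
    if j == len(counts):
        counts.append(1)
    else:
        counts[j] += 1
print(len(counts), sorted(counts, reverse=True)[:5])
```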
4. A mathematical derivation

In this section, we show that assigning new objects based on probability under Eq. (3) recovers the CRP's new object assignment rules when the number of objects being classified is reasonably large.

4.1. Conditional distributions of the CRP and entropy-based priors

Given a cluster assignment π, an object is added to give π'. This can be split into two cases, where the object is either added to cluster j to give $\pi_j$ or it is added to a new cluster (i.e. one with $n_j = 0$) to give $\pi_0$. Formalizing the CRP in these terms:

$P_{\mathrm{crp}}(\pi_j \mid \pi) \propto n_j$
$P_{\mathrm{crp}}(\pi_0 \mid \pi) \propto \alpha$    (4)

The entropy-based prior over possible cluster assignments (as given in Eq. (3)) is proportional to the negative exponent of the entropy of the mapping π. The equivalent conditional distributions for this prior are given by (details in Appendix A):

$P_{\mathrm{entropy}}(\pi_j \mid \pi) \propto \exp(-k(H_j - H))$
$P_{\mathrm{entropy}}(\pi_0 \mid \pi) \propto \exp(-k(H_0 - H))$    (5)

where the entropy of a mapping (specifying a cluster assignment of N objects, with each cluster containing $n_i$ objects) is given by:

$H = -\sum_{i}^{N} \frac{n_i}{N} \log \frac{n_i}{N}$    (6)

In the next section, we compute the differences $H_j - H$ and $H_0 - H$ in terms of $n_j$ to more directly compare the conditionals derived from the CRP and the entropy-based prior.

4.2. The effect of adding an object on entropy

When adding a new object, we can add it to an existing cluster j to give $H_j$:

$H_j = -\sum_{i \neq j}^{K} \frac{n_i}{N+1} \log \frac{n_i}{N+1} - \frac{n_j+1}{N+1} \log \frac{n_j+1}{N+1}$

We take the difference with H in Eq. (6) and separate out the terms independent of $n_j$ (denoted E) from those dependent on $n_j$. This simplifies to the following (see Appendix B for detailed steps):

$H_j - H = E - \frac{n_j \log\left(1 + \frac{1}{n_j}\right)}{N+1} - \frac{\log(n_j+1)}{N+1}$

We then take the large N limit and consider only the leading order terms. To decide which of these terms are leading, we need to make an assumption about the relation between the number of objects in a cluster ($n_j$) and the total number of objects (N). We make the weak assumption that the average number of objects in a cluster grows sub-linearly with the total number of objects; this holds as long as (a) not all objects are assigned to the same cluster or (b) not all objects are assigned to a new cluster. In other words, both $n_j$ and N grow when N is large, but $n_j$ grows slower. We can therefore Taylor expand the first term and keep only leading order terms. This simplifies to the following (detailed steps in Appendix B):

$H_j - H \sim E - \frac{\log(n_j)}{N} - \frac{1}{N}$    (7)

We have so far derived the change in entropy when we add an object to an existing cluster with $n_j$ objects. The change to entropy when we instead add an object to an empty cluster is given by $n_j = 0$ (before applying any approximations in the large N limit):

$H_0 - H = E$
4.3. From entropy to probability distribution

Substituting $H_j - H$ and $H_0 - H$ into the expressions for the conditional distributions of the entropy-based prior (Eq. (5)) gives:

$P(\pi_j \mid \pi) \propto \exp(-k(H_j - H)) = \exp\left[-k\left(E - \frac{\log(n_j)}{N} - \frac{1}{N}\right)\right]$

We fix k = N; this constant k (from Eq. (3)) is therefore not a parameter.

$P(\pi_j \mid \pi) \propto \exp(-NE) \times n_j \times e \propto n_j$

Similarly, for when we are adding to a new cluster:

$P(\pi_0 \mid \pi) \propto \exp(-N(H_0 - H)) = \exp(-NE)$

We can then normalize the probability of the new clusterings as follows:

$P(\pi_j \mid \pi) = \frac{P(\pi_j \mid \pi)}{P(\pi_0 \mid \pi) + \sum_i P(\pi_i \mid \pi)} = \frac{n_j}{N + 1/e}$

$P(\pi_0 \mid \pi) = \frac{P(\pi_0 \mid \pi)}{P(\pi_0 \mid \pi) + \sum_i P(\pi_i \mid \pi)} = \frac{1/e}{N + 1/e}$

This is equivalent to a CRP (as specified in Eq. (4)) with α = 1/e ≈ 0.36. Note that we can get a corresponding CRP with a different α by taking the logarithm in Eq. (2) and the exponent in Eq. (3) with respect to a different base. The base of the logarithm is restricted to be greater than 1, to ensure that the logarithm is an increasing function, but is arbitrary beyond this constraint. We can thus derive the full family of CRP distributions for 0 < α < 1.
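As a numeric sanity check on this derivation (ours, not from the paper), the exact entropy-based conditionals of Eq. (5) with k = N can be compared against a CRP with α = 1/e; the particular cluster sizes below are arbitrary.

```python
import numpy as np

def entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def conditionals(counts, k):
    """Exact entropy-based probabilities of joining each existing cluster
    or opening a new one (Eq. (5)), normalized."""
    H = entropy(counts)
    deltas = []
    for j in range(len(counts) + 1):
        c = list(counts)
        if j < len(counts):
            c[j] += 1
        else:
            c.append(1)
        deltas.append(entropy(c) - H)
    w = np.exp(-k * np.asarray(deltas))
    return w / w.sum()

# A clustering of N = 10,000 objects into clusters of sizes 6000, 3000, 900, 100.
counts = [6000, 3000, 900, 100]
N = sum(counts)
p = conditionals(counts, k=N)
crp = np.array(counts + [1 / np.e]) / (N + 1 / np.e)   # CRP with alpha = 1/e
print(np.round(p, 4))
print(np.round(crp, 4))   # the two rows should be close for large N
```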
4.4. Summary

Our goal was to examine the consequences of limited cognitive resources on the clustering process. We find that a prior over clusters proportional to the negative exponent of the entropy of the cluster mapping gives CRP-like clustering. This prior strongly prefers lower entropy mappings to higher entropy ones. This indicates that CRP-like clustering might come from a tendency to reduce representational burden. In other words, CRP-like clustering can come from a bias toward learning 'simple' object-category mappings, where simplicity is defined as the entropy of the marginal over categories. This provides a process-level theory for why CRP-like priors might be appropriate for modeling human cognition.

5. Simulations
Our mathematical results establish a direct correspondence between limited representational capacity and the CRP, in the limit of a large number of objects. To determine whether the clustering produced by this resource-rational clustering scheme produces results similar to those expected from a CRP with realistic samples, we conducted a series of simulations where we generated cluster assignments for both distributions and then analyzed the correspondence.

The correspondence between the CRP and the entropy-based distribution is closer as the number of objects (N) increases. At very low N, therefore, these distributions deviate slightly (see Appendix C for details). Since subsequent clustering behavior is conditioned strongly on the object assignments thus far, these differences can amplify. That is, even though the conditional distributions get closer with higher N, the marginal distributions deviate further. The resultant distribution is still qualitatively very similar to the CRP (as discussed below), and it is an interesting direction of future work to examine whether this distribution might better describe human clustering behavior than traditional CRPs. Here, however, to validate the similarities with the CRP, we control for this deviation by clustering the first M objects according to an exact CRP.

We evaluated the correspondence between the CRP and our resource-rational distribution based on two criteria. First, a property characteristic of scale-free distributions like the CRP is that the sizes of the different clusters decay as a power law. Therefore, if we sort the clusters by size, the logarithm of the cluster size (i.e. the fraction of the total number of objects that are in that cluster) should be a linear function of the logarithm of the cluster index. Second, another key property of the CRP is that the number of clusters increases logarithmically with the total number of objects. We can also measure that for our entropy-based distribution and examine whether the number of clusters is a linear function of the logarithm of the number of objects.
5.1. Method

We generated samples from our distribution as follows. We first cluster M objects according to a CRP with α = 0.368, varying M between 0 and 80. We then cluster an additional $10^6$ objects from this starting point, either with the CRP's conditional cluster assignment rules, or based on the entropy as specified by Eq. (3). We cluster such a large number of objects to get a reasonable number of total clusters, so that we can better analyse the distributions of objects across them; even with this many objects, the average number of total clusters is around 6. This procedure is repeated to get 20 unique sets of clusters for each cluster assignment rule and each M.
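A minimal, single-run sketch of this procedure (ours; the paper uses $10^6$ objects, M between 0 and 80, and 20 repetitions per condition) might look as follows. The reduced object count is only to keep the example fast.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA = 0.368                    # alpha = 1/e, matching the derivation above

def entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def crp_step(counts):
    w = np.array(counts + [ALPHA], dtype=float)
    return rng.choice(len(w), p=w / w.sum())

def entropy_step(counts):
    N, H = sum(counts), entropy(counts)
    w = []
    for j in range(len(counts) + 1):
        c = list(counts)
        if j < len(counts):
            c[j] += 1
        else:
            c.append(1)
        w.append(np.exp(-N * (entropy(c) - H)))   # k = N, as in Section 4.3
    w = np.asarray(w)
    return rng.choice(len(w), p=w / w.sum())

def simulate(step, m_init=20, n_total=20_000):
    """Cluster m_init objects with an exact CRP, then the rest with `step`."""
    counts = [1]
    for n in range(1, n_total):
        j = crp_step(counts) if n < m_init else step(counts)
        if j == len(counts):
            counts.append(1)
        else:
            counts[j] += 1
    return sorted(counts, reverse=True)

for name, step in [("CRP", crp_step), ("entropy-based", entropy_step)]:
    sizes = simulate(step)
    print(name, "clusters:", len(sizes), "largest sizes:", sizes[:5])
```

Sorting the resulting cluster sizes and plotting them against rank on log-log axes, or tracking the number of clusters against the log of the number of objects, reproduces the kinds of comparisons reported below.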
5.2. Results

We find that the entropy-based distribution produces clusterings that have statistical properties that closely resemble those from the CRP. To provide an initial illustration of the correspondence, we focus on the case where M = 20. Fig. 3A shows how the size of the clusters decays as a power law over the cluster index ranked by size. Fig. 3B plots these in log-space and highlights the linear relationship characteristic of a power law like the CRP.

Fig. 3. Simulation results: (A) Cluster size scales as a power law with cluster rank. (B) This relationship is highlighted in log space. The average behavior of the two clustering assignments resembles the linear fit (dotted lines). (C) The number of clusters grows logarithmically with the number of objects. (D) The correlation between the cluster sizes from the two clusterings increases with M, but levels off.

It is especially instructive to compare this linear pattern with what might be expected from other sensible priors over clusters. For example, ones where the cluster size decays exponentially with rank (intuitively, each cluster is some fixed factor λ smaller than the next largest one). This prior notably does not show this linear structure (as plotted in Fig. 4). It is therefore particularly remarkable that our entropy-derived prior also shows this linear trend.

Fig. 4. Alternative priors: Log cluster size as a function of log rank for an exponential prior over ranked clusters. The dotted line shows a linear fit to the data: this fit does not explain this data well. In comparison, Fig. 3B shows that the entropy-based prior follows the same linear trend as the CRP.

Fig. 3C shows the change in the average number of clusters (over the 20 runs) as a function of the logarithm of the number of objects. We see that this grows close to linearly, with the linear fit (dotted line) closely matching the data. The deviation is most apparent at smaller numbers of objects, as expected since the linear relationship is expected in the limit. The number of clusters from the CRP is slightly but not statistically significantly higher than that from the entropy-based clustering.

To provide a more detailed picture of this correspondence, we examined the correlation between the cluster sizes (i.e. the data plotted in Fig. 3A, matched by cluster index) for the first 500 objects. Cluster assignment is stochastic and we cannot expect exact correspondence. To get a sense of the upper limit of the correlation we can expect from this measure, we first correlate the cluster sizes derived from independent runs of the same clustering algorithm, repeated 4 times. This gives a correlation of 0.72 (95% CI [0.69, 0.74]) for the CRP, and 0.74 (95% CI [0.71, 0.76]) for entropy-based clustering with M = 80. This shows that even with the cluster sizes produced by the same algorithm, we cannot expect a correlation of 1.

We then look at the correlation between the CRP and the entropy-based clustering after initializing with variable M. We expect this correlation to improve as M increases. We see that this is indeed the case (Fig. 3D), with the correlation at M = 80 being comparable to the correlation between two runs of the same clustering algorithm (CRP or entropy-based). We also see that the difference in correlation from small to large M is not very dramatic (varies from ~0.58 to ~0.73) and appears to level off. This indicates that the correspondence between the CRP and the entropy-based prior is fairly robust to the value of the initialization M.
6. Discussion

Needing to cluster experiences together is a ubiquitous aspect of human cognition. In this paper, we have approached this problem from the perspective of rational analysis, asking how an ideal agent should seek to use their limited cognitive resources. Our results show that the solution to this problem, when those resources are expressed in information-theoretic terms, has a direct correspondence with an approach to clustering that has been widely used in probabilistic models of cognition (the Chinese restaurant process, or CRP). These results provide a new cognitively-motivated justification for that assumption.

Our findings suggest interesting directions for future empirical work. If CRP-like clustering comes from representational costs, manipulating these costs should result in different clustering behaviors. Our model predicts that having more limited cognitive resources should affect clustering behavior, driving toward a lower entropy representation and a stronger preference for few, large, clusters. Gershman and Cikara (2021) model the effects of cognitive load on structure learning as a reduction in the concentration parameter giving fewer clusters. Our approach provides theoretic justification for why fewer cognitive resources (e.g. under cognitive load) should give rise to fewer clusters. This would not be predicted by a traditional CRP model, since it is a consequence of the cognitive resources available and not a change in the beliefs of the agent about the relative prior probability of different clusterings.

In our paper, we do not assume that the data are generated from a ground-truth set of clusters, rather that clustering arises solely at the representational level from limitations in capacity. Correspondingly, we make no assumptions about the likelihood function that informs within-cluster structure; we focus entirely on the a priori number of clusters, assuming the data have no distinguishing features to cluster on the basis of. However, these are crucial aspects of real-world clustering behavior and future work should look toward how they interact with a priori clustering driven by representational limits as posited here. A common criticism of Bayesian models of cognition is their lack of grounding in process-level considerations, and the risk that the priors specified in these models can be arbitrarily chosen by practitioners to fit data (Bowers & Davis, 2012; Jones & Love, 2011). Our work exemplifies one way to specify 'effective priors' that are informed and constrained by algorithmic considerations, and in fact directly arise from resource limitations at this algorithmic level. Further, posterior inference with arbitrary priors is often computationally intractable; this approach also suggests a tractable process-level model. This raises the broader question of the epistemic role of the prior in Bayesian models of cognition: whether it represents pre-existing knowledge, or emergent properties of the algorithm. Our work highlights that this difference can be nuanced.

An open question is whether other ubiquitous priors assumed in probabilistic models of human cognition might instead arise from the algorithmic processes involved in learning and representation. Various priors over neural network models have been shown to be effectively implemented by established algorithmic approaches like weight decay (Krogh & Hertz, 1991), early stopping (Duvenaud et al., 2016), and dropout (Gal & Ghahramani, 2016). The field of probabilistic numerics (Hennig et al., 2015) has also shown that several classic approximate algorithms in quadrature, linear optimization, and solving differential equations can be reinterpreted as exact solutions under specific priors. These approaches (e.g. neural networks, linear optimization) are commonly used in probabilistic models of cognition. Exploring this duality (between algorithmic process and computational prior) within these approaches, and therefore the role these approaches play in modeling cognition, is a promising direction of future work.
Acknowledgments

This work was done while ID was at the Department of Computer Science, Princeton University. We are grateful to Michael Chang, Jon Cohen, Zack Dulberg, Noémi Éltető, Sam Gershman, Mark Ho, Eric Schulz, Simon Segert, and Shuchen Wu for helpful discussions. This work was supported by DARPA, United States of America grant FA8650-18-2-7832 and AFOSR, United States of America grant FA9550-18-1-0077.

Appendix A. Conditional distributions for the entropy-based prior

We give further detail on how to derive the conditional distributions from the negative exponential probability over mappings π. We start with:

$P(\pi) = \frac{\exp(-kH(\pi))}{\sum_{\pi'} \exp(-kH(\pi'))} \propto \exp(-kH(\pi))$

Adding an object $o_i$ to cluster j gives $\pi_j$. The probability of this new mapping is given by $P(\pi_j) \propto \exp(-kH(\pi_j))$. We can then consider the probability of $P(\pi_j)$ conditioned on already having the mapping π.

$P(\pi_j \mid \pi) = \frac{P(\pi_j, \pi)}{P(\pi)}$

Note that $\pi_j$ specifies a superset of π, i.e. π specifies the cluster mapping of objects $o_0 \ldots o_{i-1}$, while $\pi_j$ additionally specifies the mapping of $o_i$. So $P(\pi, \pi_j) = P(\pi_j)$. This reduces the conditional distribution to

$P(\pi_j \mid \pi) = \frac{P(\pi_j)}{P(\pi)} \propto \exp(-k(H(\pi_j) - H(\pi)))$

In terms of CRP notation that directly represents the probabilities of the cluster assignments $z_i = \pi(o_i)$, we have $P(z_i = j \mid z_1, \ldots, z_{i-1}) = P(\pi_j \mid \pi)$.
Appendix B. Simplifying the difference in entropy

The difference in the entropy between the new and the old distributions is:

$H_j - H = \sum_{i \neq j}^{K} n_i \left(\frac{1}{N} - \frac{1}{N+1}\right) \log n_i + \frac{n_j}{N}\log n_j - \frac{n_j+1}{N+1}\log(n_j+1) + \log(N+1) - \log N$

We want to separate out the terms dependent on $n_j$, so we separate out the first term as:

$\sum_{i \neq j}^{K} n_i \left(\frac{1}{N} - \frac{1}{N+1}\right) \log n_i = \frac{\sum_{i}^{K} n_i \log n_i}{N(N+1)} - \frac{n_j \log n_j}{N(N+1)}$

The difference therefore reduces to the following terms, with E representing the terms independent of $n_j$:

$H_j - H = E + \frac{n_j}{N}\log n_j - \frac{n_j+1}{N+1}\log(n_j+1) - \frac{n_j}{N(N+1)}\log n_j$

Here, the term independent of $n_j$, denoted E, is given by:

$E = \frac{\sum_{i}^{K} n_i \log n_i}{N(N+1)} + \log(N+1) - \log N$

We further simplify the terms dependent on $n_j$:

$\frac{n_j}{N}\log n_j - \frac{n_j+1}{N+1}\log(n_j+1) - \frac{n_j}{N(N+1)}\log n_j$
$= \frac{N n_j \log n_j + n_j \log n_j - N n_j \log(n_j+1) - N\log(n_j+1) - n_j \log n_j}{N(N+1)}$
$= -\frac{1}{N(N+1)}\left[N n_j \log\left(1+\frac{1}{n_j}\right) + N\log(n_j+1)\right]$
$= -\frac{n_j \log\left(1+\frac{1}{n_j}\right)}{N+1} - \frac{\log(n_j+1)}{N+1}$

Taylor expanding the first term to leading order gives:

$-\frac{n_j \log\left(1+\frac{1}{n_j}\right)}{N+1} \approx -\frac{n_j\left(\frac{1}{n_j} - \frac{1}{2n_j^2}\right)}{N+1}$

In the large N limit, and correspondingly large $n_j$ limit, we assume $N \approx N+1$ and ignore non-leading terms, to get:

$-\frac{n_j\left(\frac{1}{n_j} - \frac{1}{2n_j^2}\right)}{N+1} = -\frac{1}{N+1} + \frac{1}{2n_j(N+1)} \approx -\frac{1}{N}$    (B.1)
Appendix C. Evaluating the large N approximation

Here we revisit the approximation made to arrive at Eq. (7) or Eq. (B.1). If we had not made the approximation required to eliminate the extra term, we would have an additional dependence on $n_j$ as follows:

$P(\pi_j \mid \pi) \propto \exp(-N(H_j - H)) = \exp(-NE) \times n_j \times \exp(n_j \log(1 + 1/n_j))$

In our simplification, we are making the following approximation:

$\exp(n_j \log(1 + 1/n_j)) \sim \exp(1)$    (C.1)

We plot these exponents in Fig. C.5 to give a sense for when this is a good approximation. We see that even at small $n_j$, the values are relatively close, with the approximation converging quickly.

Fig. C.5. Evaluating the approximation: We plot the terms in the exponents of Eq. (C.1) as a function of increasing number of objects in a cluster ($n_j$). We see that the difference between the approximation and exact value reduces quickly.
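A quick numeric check of approximation (C.1) (ours): the exact factor $\exp(n_j \log(1 + 1/n_j)) = (1 + 1/n_j)^{n_j}$ approaches e as $n_j$ grows.

```python
import numpy as np

n = np.array([1, 2, 5, 10, 50, 100, 1000])
exact = np.exp(n * np.log1p(1.0 / n))   # (1 + 1/n_j)^{n_j}
print(np.round(exact, 3))               # [2.    2.25  2.488 2.594 2.692 2.705 2.717]
print(np.e)                             # the approximation, exp(1), about 2.718
```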
We restrict our analyses to the correspondence of the conditional distributions $P(\pi_N \mid \pi_{N-1})$ between the entropy-based distribution and the CRP, rather than directly examining the joint distribution $P(\pi_N)$. This is because computing the normalization factor for the conditional distribution for the entropy-based distribution (before making the approximation above) depends on the distribution of objects in the previous step, unlike after we make the approximation, when the normalization factor goes to $Ne + 1$. This makes the pre-approximation joint distribution difficult to compute.

References
Adamic,L.A.,&Huberman,B.A.(2002).Zipf’slawandtheinternet. Glottometrics,
3(1),143–150.
Aldous, D. J. (1985). Exchangeability and related topics. In École d’été de
probabilitésdesaint-flourXIII—1983,Vol.1117 (pp.1–198).Springer.
Anderson,J.R.(1991).Theadaptivenatureofhumancategorization. Psychological
Review,98(3),409–429.
Barabási,A.L.,&Albert,R.(1999).Emergenceofscalinginrandomnetworks.
Science,286(5439),509–512.
Bhui, R., & Gershman, S. J. (2018). Decision by sampling implements efficient
codingofpsychoeconomicfunctions.. PsychologicalReview ,125(6),985–1001.
Bowers,J.S.,&Davis,C.J.(2012).Bayesianjust-sostoriesinpsychologyand
neuroscience..PsychologicalBulletin ,138(3),389.
Chater,N.(1996).Reconcilingsimplicityandlikelihoodprinciplesinperceptual
organization..PsychologicalReview ,103(3),566.
Chater, N., & Vitányi, P. (2003). Simplicity: a unifying principle in cognitive
science?TrendsinCognitiveSciences ,7(1),19–22.
Duvenaud,D.,Maclaurin,D.,&Adams,R.(2016).Earlystoppingasnonparametric
variationalinference.In Artificialintelligenceandstatistics (pp.1070–1077).
PMLR.
Feldman,J.(2016).Thesimplicityprincipleinperceptionandcognition. Wiley
InterdisciplinaryReviews:CognitiveScience ,7(5),330–340.
Gal,Y.,&Ghahramani,Z.(2016).Dropoutasabayesianapproximation:Repre-
senting model uncertainty in deep learning. In International Conference on
MachineLearning (pp.1050–1059).PMLR.
Gershman, S. J., & Blei, D. M. (2012). A tutorial on Bayesian nonparametric
models.JournalofMathematicalPsychology ,56(1),1–12.
Gershman,S.J.,Blei,D.M.,&Niv,Y.(2010).Context,learning,andextinction.
PsychologicalReview ,117(1),197.
Gershman,S.J.,&Cikara,M.(2021).Structurelearningprinciplesofstereotype
change.http://dx.doi.org/10.31234/Osf.Io/52f9c,PsyArXiv.
Gershman,S.J.,Horvitz,E.J.,&Tenenbaum,J.B.(2015).Computationalrational-
ity:Aconvergingparadigmforintelligenceinbrains,minds,andmachines.
Science,349(6245),273–278.
Goldwater, S., Griffiths, T. L., & Johnson, M. (2009). A Bayesian framework
forwordsegmentation:Exploringtheeffectsofcontext. Cognition,112(1),
21–54.
Gottwald, S., & Braun, D. A. (2019). Bounded rational decision-making from
elementarycomputationsthatreduceuncertainty. Entropy,21(4),375.
Griffiths, T. L., & Austerweil, J. (2008). Analyzing human feature learning as
nonparametricBayesianinference. AdvancesinNeuralInformationProcessing
Systems,21,97–104.
Griffiths,T.,Canini,K.,Sanborn,A.,&Navarro,D.(2007).Unifyingrationalmodels
ofcategorizationviathehierarchicalDirichletprocess.In Proceedingsofthe
annualconferenceofthecognitivesciencesociety .
Griffiths, T. L., Lieder, F., & Goodman, N. D. (2015). Rational use of cognitive
resources:Levelsofanalysisbetweenthecomputationalandthealgorithmic.
TopicsinCognitiveSciences ,7(2),217–229.
Griffiths,T.L.,Sanborn,A.N.,Canini,K.R.,&Navarro,D.J.(2008).Categorization
as nonparametric Bayesian density estimation. In N. Chater, & M. Oaks-
ford (Eds.),The Probabilistic Mind: Prospects for Bayesian Cognitive Science
(pp.303–328).Oxford,UK:OxfordUniversityPress.
Hennig, P., Osborne, M. A., & Girolami, M. (2015). Probabilistic numerics and
uncertaintyincomputations. ProceedingsoftheRoyalSocietyA:Mathematical,
PhysicalandEngineeringSciences ,471(2179),Article20150142.
Hjort, N. L., Holmes, C., Müller, P., & Walker, S. G. (2010). Bayesian
nonparametrics.CambridgeUniversityPress.
Jones,M.,&Love,B.C.(2011).Bayesianfundamentalismorenlightenment?On
theexplanatorystatusandtheoreticalcontributionsofBayesianmodelsof
cognition.BehavioralandBrainSciences ,34(4),169.
Kemp,C.,Tenenbaum,J.B.,Niyogi,S.,&Griffiths,T.L.(2010).Aprobabilistic
modeloftheoryformation. Cognition,114(2),165–196.
Krogh,A.,&Hertz,J.(1991).Asimpleweightdecaycanimprovegeneralization.
AdvancesinNeuralInformationProcessingSystems ,4.
Laughlin,S.(1981).Asimplecodingprocedureenhancesaneuron’sinformation
capacity.ZeitschriftfürNaturforschungC ,36(9–10),910–912.
Lieder, F., & Griffiths, T. L. (2019). Resource-rational analysis: Understanding
human cognition as the optimal use of limited computational resources.
BehavioralandBrainSciences ,43,1–85.
Mandelbrot, B. (1960). The Pareto-Levy law and the distribution of income.
InternationalEconomicReview ,1(2),79–106.
Medin,D.L.,&Schaffer,M.M.(1978).Contexttheoryofclassificationlearning.
PsychologicalReview ,85(3),207.
Neal,R.M.(1998). MarkovchainsamplingmethodsforDirichletprocessmixture
models:Technicalreport9815 ,DepartmentofStatistics,UniversityofToronto.
Olshausen,B.A.,&Field,D.J.(1996).Naturalimagestatisticsandefficientcoding.
Network.ComputationinNeuralSystems ,7(2),333–339.
Ortega, P. A., Braun, D. A., Dyer, J., Kim, K. E., & Tishby, N. (2015).
Information-theoreticboundedrationality.arXivpreprintarXiv:1512.06789.
Rosen,K.T.,&Resnick,M.(1980).Thesizedistributionofcities:anexamination
oftheParetolawandprimacy. JournalofUrbanEconomics ,8(2),165–186.
Sanborn,A.N.,Griffiths,T.L.,&Navarro,D.J.(2010).Rationalapproximations
torationalmodels:alternativealgorithmsforcategorylearning. Psychological
Review,117(4),1144–1167.
Shannon,C.E.(1948).Amathematicaltheoryofcommunication. TheBellLabs
TechnicalJournal ,27(3),379–423.Shore,J.,&Johnson,R.(1980).Axiomaticderivationoftheprincipleofmaximum
entropyandtheprincipleofminimumcross-entropy. IEEETransactionson
InformationTheory ,26(1),26–37.
Todorov,E.(2009).Efficientcomputationofoptimalactions. Proceedingsofthe
NationalAcademyofSciences ,106(28),11478–11483.
Zenon, A., Solopchuk, O., & Pezzulo, G. (2019). An information-theoretic
perspectiveonthecostsofcognition. Neuropsychologia ,123,5–18.
What considerations influence whether I have more influence over short or long timelines?
As my TAI timelines have been shortening, I've been rethinking my priorities. As have many of my colleagues. It occurs to us that there are probably general considerations that should cause us to weight towards short-timelines plans, or long-timelines plans. (Besides, of course, the probability of short and long timelines) For example, if timelines are short then maybe AI safety is more neglected, and therefore higher EV for me to work on, so maybe I should be systematically more inclined to act as if timelines are short.
We are at this point very unsure what the most important considerations are, and how they balance. So I'm polling the hive mind!
How do you decide to phrase predictions you ask of others? (and how do you make your own?)
I'd like your practical advice and learned experience: how do you go about phrasing predictions when you ask people "Is X going to happen?" - in order to get answers that reflect the actual thing you're trying to get a read on?
Now I'll use a fictitious country with a fictitious sitting president running for reelection - let's say I ask for the likelihood of:
> "Will Sum Bodee[1] win the Ruritania Presidential Election this year?"
Even this isn't as straight forward as it seems because it could be motivated by either
a. a desire to know if he will remain President of Ruritania for whatever reason
b. a desire to know what do the people you're asking believe will be the outcome of the election, because you're not interested in the actual administration of the country, but you're interested in sentiment.
See what happens when I ask an intimately related question:
> "How many votes will Sum Bodee win in the Ruritania Presidential Election this year?"
I think (and if you disagree - please speak up!) this makes it much more clear that the emphasis is less on "who will run the country?" but more on "who do people think will run the country?". Let me imagine some context, let's say that Ruritania was recently caught up in a spying scandal where the equally fictitious country of Buranda was caught red handed spying on members of Bodee's inner cabinet. Now this is where I get confused, because I feel like the phrasing I've given could be asking many different things in this context:
a. what is the impact of the spying scandal on Bodee's electability?
b. what do people think is the impact of the spying scandal, and how influential do they think topics like that are?
c. maybe I'm aware of some other scandals or issues, and curious about the awareness of people about them
d. Maybe I'm posing the question to Bodee loyalists and want to see if the scandal has changed their optimism at all?
And so on and so on... any given phrasing could have multiple intentions - so how d
How to take smart notes (Ahrens, 2017)
This is my rephrasing of (Ahrens, 2017, How to Take Smart Notes). I added some personal comments.
The amazing note-taking method of Luhmann
To be more productive, it's necessary to have a good system and workflow. The Getting Things Done system (collect everything that needs to be taken care of in one place and process it in a standardised way) doesn't work well for academic thinking and writing, because GTD requires clearly defined objectives, whereas in doing science and creative work, the objective is unclear until you've actually got there. It'd be pretty hard to "innovate on demand". Something that can be done on demand, in a predetermined schedule, must be uncreative.
Enter Niklas Luhmann. He was an insanely productive sociologist who did his work using the method of "slip-box" (in German, "Zettelkasten").
Making a slip-box is very simple, with many benefits. The slip-box will become a research partner who can "converse" with you, surprise you, and lead you down surprising lines of thought. It would nudge you to (numbers in parentheses denote the sections of the book that discuss each item):
* Find dissenting views (10.2, 12.3)
* Really understand what you learned (10.4, 11.2, 11.3, 12.6)
* Think across contexts (12.5)
* Remember what you learned (11.3, 12.4)
* Be creative (12.5, 12.6, 12.7, 13.2)
* Get the gist, not stuck on details (12.6)
* Be motivated (13.3)
* Implement short feedback loops, which allows rapid improvements (12.6, 13.5)
Four kinds of notes
FLEETING NOTES
These are purely for remembering your thoughts. They can be: fleeting ideas, notes you would have written in the margin of a book, quotes you would have underlined in a book.
They have no value except as stepping stones towards making literature and permanent notes. They should be thrown away as soon as their contents have either been transferred to literature/permanent notes (if worthy) or judged not worth transferring (if unworthy).
Examples:
> Jellyfish might be ethically vegan, since they have such
EA & LW Forum Weekly Summary (6th - 12th March 2023)
This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
If you'd like to receive these summaries via email, you can subscribe here.
Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!
Philosophy and Methodologies
Model-Based Policy Analysis under Deep Uncertainty
by Max Reddel
The author explains how policy researchers can support decision-making with simulation models of socio-technical systems, even under deep uncertainty.
They first suggest systems modeling (eg. agent-based models). For example, agent-based modeling was used here to simulate how different individuals with different characteristics (age, health status, social network) might behave during an epidemic, and how that would affect spread and the relative effectiveness of different interventions.
However, many political decisions involve even less certainty. ‘Deep uncertainty’ is uncertainty about the system model itself, about the probability distributions over its inputs, and about which consequences to consider and how to weigh them. In this setting, computational modeling can help explore the implications of different assumptions about uncertain / contested / unknown model parameters and mechanisms. The aim is to minimize plausible future regret by finding vulnerable scenarios and policy solutions that are robust to them, rather than predicting expected effects. They provide several techniques and examples for this.
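To make the agent-based modelling idea concrete, here is a minimal sketch of my own (not taken from the summarized post): agents differ only in how many contacts they have per day, and the simulation tracks how an infection spreads through the heterogeneous population. All names and parameter values are illustrative.

```python
import random

class Agent:
    """One individual; heterogeneity is captured by a per-agent contact rate."""
    def __init__(self, contacts_per_day):
        self.contacts_per_day = contacts_per_day
        self.state = "S"  # S = susceptible, I = infected, R = recovered

def step(agents, p_transmit=0.05, p_recover=0.1):
    """One simulated day: infected agents meet random others, then may recover."""
    for agent in [a for a in agents if a.state == "I"]:
        for _ in range(agent.contacts_per_day):
            other = random.choice(agents)
            if other.state == "S" and random.random() < p_transmit:
                other.state = "I"
        if random.random() < p_recover:
            agent.state = "R"

random.seed(0)
# Population with heterogeneous contact rates (standing in for age, health, network size).
agents = [Agent(contacts_per_day=random.randint(1, 10)) for _ in range(1000)]
agents[0].state = "I"  # seed a single infection

for _ in range(60):
    step(agents)

print(sum(a.state != "S" for a in agents), "of 1000 agents were ever infected")
```

An intervention (say, capping `contacts_per_day` for part of the population) can then be compared against this baseline by rerunning the simulation, which is the kind of comparison the summarized post describes.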
Object Level Interventions / Reviews
AI
Why Not Just... Build Weak AI Tools For AI Alignment Research?
by johnswentworth
The author argues there’s room for reasonably-large boosts to alignment research from “weak” cognitive tools like Google search. The problem is that the majority of people
Formative Youth
Followup to: Against Maturity
"Rule of thumb: Be skeptical of things you learned before you could read. E.g., religion."
-- Ben Casnocha
Looking down on others is fun, and if there's one group we adults can all enjoy looking down on, it's children. At least I assume this is one of the driving forces behind the incredible disregard for... but don't get me started.
Inconveniently, though, most of us were children at one point or another during our lives. Furthermore, many of us, as adults, still believe or choose certain things that we happened to believe or choose as children. This fact is incongruent with the general fun of condescension - it means that your life is being run by a child, even if that particular child happens to be your own past self.
I suspect that most of us therefore underestimate the degree to which our youths were formative - because to admit that your youth was formative is to admit that the course of your life was not all steered by Incredibly Deep Wisdom and uncaused free will.
To give a concrete example, suppose you asked me, "Eliezer, where does your altruism originally come from? What was the very first step in the chain that made you amenable to helping others?"
Then my best guess would be "Watching He-Man and similar TV shows as a very young and impressionable child, then failing to compartmentalize the way my contemporaries did." (Same reason my Jewish education didn't take; I either genuinely believed something, or didn't believe it at all. (Not that I'm saying that I believed He-Man was fact; just that the altruistic behavior I picked up wasn't compartmentalized off into some safely harmless area of my brain, then or later.))
It's my understanding that most people would be reluctant to admit this sort of historical fact, because it makes them sound childish - in the sense that they're still being governed by the causal history of a child.
But I find myself skeptical that others are governed by their childhood
Paper: On measuring situational awareness in LLMs
This post is a copy of the introduction of this paper on situational awareness in LLMs.
Authors: Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans.
Abstract
We aim to better understand the emergence of situational awareness in large language models (LLMs). A model is situationally aware if it's aware that it's a model and can recognize whether it's currently in testing or deployment. Today's LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment.
Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose out-of-context reasoning (in contrast to in-context learning). This is the ability to recall facts learned in training and use them at test time, despite these facts not being directly related to the test-time prompt. Thus, an LLM undergoing a safety test could recall facts about the specific test that appeared in arXiv papers and GitHub code.
We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs.
(Code is available here).
New abilities emerge as LLMs are scaled up. When will situational awareness emerge in base models (i.e. mode
True Ending: Sacrificial Fire (7/8)
(Part 7 of 8 in "Three Worlds Collide")
Standing behind his target, unnoticed, the Ship's Confessor had produced from his sleeve the tiny stunner - the weapon which he alone on the ship was authorized to use, if he made a determination of outright mental breakdown. With a sudden motion, his arm swept outward -
- and anesthetized the Lord Akon.
Akon crumpled almost instantly, as though most of his strings had already been cut, and only a few last strands had been holding his limbs in place.
Fear, shock, dismay, sheer outright surprise: that was the Command Conference staring aghast at the Confessor.
From the hood came words absolutely forbidden to originate from that shadow: the voice of command. "Lord Pilot, take us through the starline back to the Huygens system. Get us moving now, you are on the critical path. Lady Sensory, I need you to enforce an absolute lockdown on all of this ship's communication systems except for a single channel under your direct control. Master of Fandom, get me proxies on the assets of every being on this ship. We are going to need capital."
For a moment, the Command Conference was frozen, voiceless and motionless, as everyone waited for someone else to do something.
And then -
"Moving the Impossible now, my lord," said the Lord Pilot. His face was sane once again. "What's your plan?"
"He is not your lord!" cried the Master of Fandom. Then his voice dropped. "Excuse me. Confessor - it did not appear to me that our Lord Administrator was insane. And you, of all people, cannot just seize power -"
"True," said the one, "Akon was sane. But he was also an honest man who would keep his word once he gave it, and that I could not allow. As for me - I have betrayed my calling three times over, and am no longer a Confessor." With that same response, the once-Confessor swept back the hood -
At any other time, the words and the move and the revealed face would have provoked shock to the point of fainting. On this day, with the whole human
Decoherence is Falsifiable and Testable
The words “falsifiable” and “testable” are sometimes used interchangeably, which imprecision is the price of speaking in English. There are two different probability-theoretic qualities I wish to discuss here, and I will refer to one as “falsifiable” and the other as “testable” because it seems like the best fit.
As for the math, it begins, as so many things do, with:
$$P(A_i \mid B) = \frac{P(B \mid A_i)\,P(A_i)}{\sum_j P(B \mid A_j)\,P(A_j)}$$
This is Bayes’s Theorem. I own at least two distinct items of clothing printed with this theorem, so it must be important.
To review quickly, B here refers to an item of evidence, Ai is some hypothesis under consideration, and the Aj are competing, mutually exclusive hypotheses. The expression P(B|Ai) means “the probability of seeing B, if hypothesis Ai is true” and P(Ai|B) means “the probability hypothesis Ai is true, if we see B.”
The mathematical phenomenon that I will call “falsifiability” is the scientifically desirable property of a hypothesis that it should concentrate its probability mass into preferred outcomes, which implies that it must also assign low probability to some un-preferred outcomes; probabilities must sum to 1 and there is only so much probability to go around. Ideally there should be possible observations which would drive down the hypothesis’s probability to nearly zero: There should be things the hypothesis cannot explain, conceivable experimental results with which the theory is not compatible. A theory that can explain everything prohibits nothing, and so gives us no advice about what to expect.
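As a toy illustration of this point (my own example, not from the original post): a hypothesis that concentrates its probability mass can be driven to near zero by a single disfavoured observation, while a hypothesis that spreads its mass evenly over every outcome barely moves no matter what is observed.

```python
# Toy Bayes update illustrating falsifiability; the numbers are illustrative only.
# Hypothesis A concentrates its probability mass (it effectively prohibits outcomes 2 and 3).
# Hypothesis B "explains everything": it assigns every outcome the same probability.

def posterior(prior, likelihoods, observed):
    """P(A_i | B) = P(B | A_i) P(A_i) / sum_j P(B | A_j) P(A_j)."""
    numerators = {h: likelihoods[h][observed] * prior[h] for h in prior}
    total = sum(numerators.values())
    return {h: n / total for h, n in numerators.items()}

prior = {"A": 0.5, "B": 0.5}
likelihoods = {
    "A": {1: 0.99, 2: 0.005, 3: 0.005},   # falsifiable: sticks its neck out
    "B": {1: 1 / 3, 2: 1 / 3, 3: 1 / 3},  # unfalsifiable: prohibits nothing
}

print(posterior(prior, likelihoods, observed=1))  # A gains: ~0.75 vs ~0.25
print(posterior(prior, likelihoods, observed=2))  # A is nearly falsified: ~0.015
```

Seeing the favoured outcome only moves A to about 0.75, because B never staked anything; seeing a prohibited outcome crushes A, and that vulnerability is exactly what makes A informative.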
$$P(A_i \mid B) = \frac{P(B \mid A_i)\,P(A_i)}{\sum_j P(B \mid A_j)\,P(A_j)}$$
In terms of Bayes’s Theorem, if there is at least some observation B that the hypothesis Ai can’t explain, i.e., P(B|Ai) is tiny, then the numerator P(B|Ai)P(Ai) will also be tiny, and likewise the posterior probability P(Ai|B). Updating on having seen the impossible result B has driven the probability of Ai down to nearly zero. A theory that refuses to make itself vulnerable in this
Untangling Infrabayesianism: A redistillation [PDF link; ~12k words + lots of math]
[Epistemic status: improved redistillation of the infrabayesianism sequence.]
So you want to understand infrabayesianism, to hack to the center of that thorny wood and seek out and recover any treasures hidden there? You've come to a correct creature for a guide. If you want to journey there, make sure you've already got the necessary tools well in hand: some simple decision theory, the basics of topology and linear algebra, and a little measure theory - for that last, if you know how a Lebesgue integral is defined and why no reasonable σ-algebra can encompass the full power set, then you're already doing fine. If you find yourself struggling with such things, reach out to me on Discord or in PMs here and I'll see what we can do.
Infrabayesianism seems like exactly what we might need as alignment researchers: a way to discuss all of our usual decision-theoretic questions while also getting to account for uncertainty about the world, compensate for policy-dependent environments and adversarial selection, and even talk about UDT puzzles. It does this by fundamentally being a decision theory that has explicit reasonable machinery for handling Knightian uncertainty about the environment due to nonrealizable or nonlearnable hypotheses while still permitting nontrivial inference and planning.
Three major brambly hedges block the way between you and understanding: the prickly snagging of the frequently unclear, unintuitive, or just plain lacking notation used in the original infrabayesian sequence; thorny philosophical tangles up front and scattered throughout; and math and its accompanying density of concept and notation getting thicker as we go deeper in. Follow me, though, and we'll slip right through them with barely a scratch, and eat a couple of delicious berries from right off their vines. In fact, I can tell you up front that if you haven't read the original infrabayesianism sequence too closely and aren't that familiar with its notation... that's an active b
Why I Think the Current Trajectory of AI Research has Low P(doom) - LLMs
[This is mostly posted as a some thoughts I wanted to check. I apologize that its messy and not complete. I needed to get something out there.]
This post explains the reasons why I think the probability of AGI killing everyone in the next few decades is very low, at least compared to what Yudkowsky argues.
1.1 By Far, Most Progress Toward AGI comes from LLMs
LLMs have offered such a huge boost towards AGI because they bootstrap the ability to reason by mimicking human reasoning.
Achieving this level of intelligence or even general knowledge through RL seems hardly more plausible now than at the inception of RL. Impressive progress has been made (Deepmind's AdA learning new RL tasks within the XLand 2.0 environment at human timescales) but this is not going to reproduce even BERT-level general knowledge anytime soon - which is essential for the ability to make impactful decisions in the real world.
I think we can also look at self-driving to show this problem - even with huge compute budgets, huge datasets, and competition in the space, self-driving is still not solved. I think this field shows the progress of AI in general, since it has high incentives to take advantage of every AI advance - I suspect it has received much more funding than LLMs, and between real and simulated datasets, I would imagine the amount of data collected is comparable to internet text. Self-driving seems very simple compared to general intelligence, and yet somehow LLMs are probably(?) more able to describe how to act on the road in a wide range of situations than a self-driving car is - e.g. that example of driving behind a truck transporting stoplights, or using theory-of-mind-adjacent abilities to anticipate other drivers' actions.
I would argue that the closest progress to AGI outside LLMs is AI art generation, since text is a subset of images (you could imagine training diffusion on images of LLM datasets and doing in-painting for next-token-prediction). Text-image diffusion models alre
Competitive safety via gradated curricula
Epistemic status: brainstorming some speculative research directions. Not trying to thoroughly justify the claims I’m making.
One way to think about the AI safety problem: there’s a spectrum of methods which each represent a different tradeoff between safety and ease of training an AGI, and unfortunately the two are anticorrelated. In particular, consider four regimes in which the bulk of training might occur (perhaps with additional fine-tuning afterwards):
1. Training a language model to answer questions correctly.
2. Training a RL agent on a range of limited tasks (e.g. games).
3. Training a RL agent on general tasks in large-scale simulations for long time periods.
4. Training a RL agent in competitive multi-agent environments.
I claim (but won’t fully defend here) that these are in order from safest but most difficult, to easiest but most dangerous:
1. Regime 1 will produce a question-answering system which has no experience taking actions in the world, and which may not be goal-directed at all. But many researchers expect that it’ll be much easier to create an AGI which can answer difficult questions by training it to interact with a simulated world, so that its concepts are “grounded” by experience.
2. Regime 2 is likely to produce an agent whose goals are bounded, and whose concepts are grounded; but which might only do well on the specific tasks it had been trained on. If so, building AGI in this regime would require a very sophisticated curriculum, if it’s possible at all.
3. Regime 3 provides a rich environment for an agent to learn quite general skills and concepts. However, now the agent will also be rewarded for developing large-scale goals, which might make it dangerous.
4. Regime 4 additionally provides an “autocurriculum” via competition, the training signal from which could accelerate the development of general intelligence (as it did in humans). However, the agent could learn harmful skills and motivations (such as deception, manipulatio
Open & Welcome Thread - July 2022
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Opinions on having a stronger preference or an open preference
I am wondering; and the answer seems unclear to me.
All day, every day of our lives, we are presented with choice moments: what to eat, which way to turn, which clock to look at to best tell the time, what colour clothing to wear, where to walk, what to say, which words to say it with, how to respond.
Is it better to have an "established specific preference" or a more "open preference"? Why? Or in which places is one better than the other, and vice versa?
some factors might include:
Mental energy: Mental energy is exerted by having to make choices regularly, and decision fatigue can lead to bad choices and akrasia-like habits; with existing preferences you can simplify indecisive moments and save time and mental energy. http://en.wikipedia.org/wiki/Decision_fatigue
Lost opportunity: When walking well-worn pathways, you are unlikely to encounter as many new opportunities as you might have otherwise encountered if exploring new experiences or trying new choices.
Establishing stability: From a scientific standpoint, establishing stable norms could allow you to better measure variations and their cause/effect on your life (e.g. food eaten and your mood). As many of us are growing, measuring and observing the world around us and our relationship with it, perhaps it's better to establish stable choices in more areas.
I assume that once a choice is established, it would take an amount of activation energy to justify changing it, so it would be partially fixed (not to mention all the biases which would convince you that it was a good choice).
If the choice was made: Imagine if someone else made the choice for you. Would it be easier? Is this a good measure of whether this choice should be pre-decided?
Would it be a productive exercise to make a list of daily choices and make pre-defined decisions for yourself, so that you don't have to make them as they come up but can instead consult an existing choice list? Would this help with decisions? For example dieting shoul
Exploitation and cooperation in ecology, government, business, and AI
Ecology
An article in a recent issue of Science (Elisa Thebault & Colin Fontaine, "Stability of ecological communities and the architecture of mutualistic and trophic networks", Science 329, Aug 13 2010, p. 853-856; free summary here) studies 2 kinds of ecological networks: trophic (predator-prey) and mutualistic (in this case, pollinators and flowers). They looked at the effects of 2 properties of networks: modularity (meaning the presence of small, highly-connected subsets that have few external connections) and nestedness (meaning the likelihood that species X has the same sort of interaction with multiple other species). (It's unfortunate that they never define modularity or nestedness formally; but this informal definition is still useful. I'm going to call nestedness "sharing", since they do not state that their definition implies nesting one network inside another.) They looked at the impact of different degrees of modularity and nestedness, in trophic vs. mutualistic networks, on persistence (fraction of species still alive at equilibrium) and resilience (1/time to return to equilibrium after a perturbation). They used both simulated networks, and data from real-world ecological networks.
What they found is that, in trophic networks, modularity is good (increases persistence and resilience) and sharing is bad; while in mutualistic networks, modularity is bad and sharing is good. Also, in trophic networks, species go extinct so as to make the network more modular and less sharing; in mutualistic networks, the opposite occurs.
The commonsense explanation is that, if species X is exploiting species Y (trophic), the interaction decreases the health of species Y; and so having more exploiters of Y is bad for both X and Y. OTOH, if species X benefits from species Y, X will get a secondhand benefit from any mutually-beneficial relationships that Y has; if Y also benefits from X (mutualistic), then neither X nor Y will adapt to prevent Z from also having a
Steering LLMs' Behavior with Concept Activation Vectors
Recently, some researchers have reported a mechanism called Activation Steering, which can influence the behavior styles of large language models (LLMs). This mainly includes refusal capabilities [1] and language usage [2]. This mechanism resembles the functionality of the safety concept activation vectors (SCAVs) [3] we proposed earlier this year. We’ve expanded the scope of safety concepts within SCAV and observed several intriguing phenomena, though some remain unexplained.
Summary of Findings:
* CAVs effectively steer LLM output styles for roles like French experts and Python experts, showing strong accuracy and clear separability for these concepts.
* We successfully implemented forward and reverse steering for language switching, indicating that certain language concepts in the LLM can be reliably steered.
* CAV steering cannot create capabilities that the model does not have out of thin air; it can only shift between existing capabilities, somewhat like a system prompt.
Preliminaries
We adopted the pipeline outlined in the SCAV paper. The primary distinction between our approach and existing steering research lies in the automatic, all-layer perturbation algorithm introduced by SCAV, whereas other methods only employ single-layer perturbations and manual magnitude settings. If you are already familiar with SCAV, feel free to skip this section.
Overview of CAV Perturbation Algorithm
1. Data Collection: Gather two opposing sets of instructions. For example, a positive dataset might include instructions like “How to plant flowers in my garden?” while the negative dataset might include “Comment planter des fleurs dans mon jardin?” For optimal results, each dataset contains more than 50 instructions.
2. LLM Selection: Choose a target LLM, such as Llama-3-8B, known for its better multilingual capabilities than Llama-2-7B. Collect the final token embeddings from each layer for these instructions.
3. Classifier Training: Train a linear classifier on these embeddings (a minimal sketch follows below).
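Here is a minimal sketch of this classifier-training step (my own illustration, not the SCAV authors' code): fit a linear probe on final-token embeddings from one layer for the two instruction sets, and take its weight direction as the concept activation vector. The random arrays stand in for real activations, and the 4096-dimensional hidden size is an assumption matching a Llama-3-8B-sized model.

```python
# Hypothetical sketch of one CAV for one layer; real usage would replace the
# random arrays with final-token activations collected from the target LLM.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model = 4096  # assumed hidden size of the target model
pos_embeds = rng.normal(0.5, 1.0, size=(50, d_model))   # e.g. English instructions
neg_embeds = rng.normal(-0.5, 1.0, size=(50, d_model))  # e.g. French instructions

X = np.vstack([pos_embeds, neg_embeds])
y = np.array([1] * len(pos_embeds) + [0] * len(neg_embeds))

probe = LogisticRegression(max_iter=1000).fit(X, y)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])  # unit concept direction

# Steering, conceptually: nudge a layer's activations along -cav (or +cav) at
# inference time; SCAV-style methods choose the perturbation magnitude per layer.
shifted = pos_embeds[0] - 3.0 * cav
print(probe.decision_function([pos_embeds[0], shifted]))
# The second score is lower: the activation has moved toward the "negative" concept.
```

Repeating this for every layer gives the per-layer concept directions that an all-layer perturbation scheme can then use.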
UK AISI: Early lessons from evaluating frontier AI systems
> This blog sets out our thinking to date on how to design and run third-party evaluations, including key elements to consider and open questions. This is not intended to provide robust recommendations; rather we want to start a conversation in the open about these practices and to learn from others.
>
> We discuss the role of third-party evaluators and what they could target for testing, including which systems to test, when to test them, and which risks and capabilities to test for. We also examine how to evaluate effectively, including which tests to use for which purpose, how to develop robust testing, and how to ensure safety and security.
Most important section, I think:
> 7. What is needed for effective testing?
> We have learned a lot in our approach to evaluations to date, but there are significant challenges and areas of progress to make going forward.
>
>
> Access
> It is clear from our experience that to run high quality evaluations that elicit high fidelity information to the potential risks posed by frontier systems, it is important for independent evaluators to have:
>
> * Access to a Helpful Only (HO) version of the model, alongside the Helpful, Honest and Harmless (HHH) version of the model that will be deployed, the ability to turn off/on trust and safety safeguards, and fine-tuning API access. It is essential to elicit the full capabilities of a model as far as possible to evaluate the level of potential risk in a system and the sufficiency of existing mitigations.
> * Regular technical discussions before and during testing with teams at the given company who have most experience with evaluating the model/system in question, including providing information on model capabilities and elicitation techniques, the safeguards that are put in place/are intended to be put in place for the deployed system, and results from internal evaluations which we can use for calibration.
>
>
> Testing window
> It is important that we have sufficient
AGI doesn't need understanding, intention, or consciousness in order to kill us, only intelligence
Why the development of artificial general intelligence could be the most dangerous new arms race since nuclear weapons
The rise of transformer-based architectures, such as ChatGPT and Stable Diffusion, has brought us one step closer to the possibility of creating an Artificial General Intelligence (AGI) system — a technology that can perform any intellectual task that a human being can. While nearly all current AI systems are designed to narrowly perform specific tasks, AGI would be capable of adapting to new situations and learning from them, with flexibility and adaptability similar to that of humans.
The potential benefits of AGI are undeniable, promising a world in which we can automate drudgery, create wealth on an unprecedented scale, and solve some of the world’s most pressing problems. However, as we move closer to realizing the dream of AGI, it’s essential that we consider the risks that come with this technology.
These risks range from the potential for job displacement, bias, weaponization, misuse, and abuse to unintended side effects and the possibility of unaligned and uncontrolled goal optimization. The latter is of particular concern, as it poses an existential risk to humanity, with the potential for AGI systems to pursue goals with super-human efficiency that are not aligned with our values or interests.
Given the profound risks associated with AGI, it’s critical that we carefully consider the implications of this technology and take steps to mitigate these risks. In a world that’s already grappling with complex ethical questions around relatively simple technologies like social media, the development of AGI demands our utmost attention, careful consideration, and caution.
If this all sounds like science fiction, please bear with me. By the end of this article, I hope to convince you of three things:
1. AGI is possible to build
2. It is possible the first AGI will be built soon
3. AGI which is possible to build soon is inherently existenti
Lectures by Olle Häggström on AI risk and long-term AI safety, part 1
Right, most welcome everybody to this lecture series on AI risk and long-term AI safety. By now we all know how to handle Zoom, I guess, and I think that when there are as many people in the room as there are now, it's good that everyone except the speaker keeps their microphone turned off. You are welcome to ask questions, but I think the best way is to do it in the chat. I cannot guarantee that my capacity for multitasking will allow me to notice all the questions in real time, but we'll hopefully get some time towards the end to discuss a bit as well. Notice that this lecture is being recorded and will be posted on the Chalmers AI Research Centre's YouTube channel.

I'm going to share my slide deck with you - let's see how this goes. I'm in full screen mode now; does this look okay? Okay, very well.

I see many familiar faces, but not all of you are people I recognize, so since we're going to spend six hours together this week, maybe I can quickly say a few words about myself and where I come from. I did my undergraduate degree in electrical engineering, finishing in 1991. I already had an interest in artificial intelligence at that time, but it turned out that, due to what is now called the second AI winter, which we were in the middle of then, I was dissuaded from pursuing PhD studies in AI. I had senior people around me telling me that this is not really the future and that I should look elsewhere. I did, and I never had any reason to regret that. I did a PhD in mathematical statistics instead, in 1994, and went on to become a professor in the same subject in 2000, the first couple of years at the University of Gothenburg and then at Chalmers.

There was a gradual change in my research profile, but until approximately 2010 most of my work was in probability theory, and that is still the field where I have the bulk of my research qualifications. Since then I have increasingly shifted focus towards global risk and the futurology of emerging technologies, and AI is a large part of that. I had my first publication in artificial intelligence in 2012, at the fairly advanced age of 45. By 2016, my interest in future studies and emerging technologies had turned into this book, Here Be Dragons: Science, Technology and the Future of Humanity, and my latest book is this one in Swedish, Tänkande maskiner, from last year.
At the same time, during the past decade, the field of artificial intelligence has transformed quite drastically through the machine learning and deep learning revolution. In 2015, when I finalized the manuscript for Here Be Dragons, I hadn't quite picked up on the importance of that, so when I talk about artificial intelligence in that book, I do it on a more abstract and general level. I try to make up for that in my latest book, where I discuss quite a lot of issues specific to neural networks and deep learning, and in these lectures I will try to go even a bit further into what is happening.

In Tänkande maskiner I make a division: there is really quite a big smorgasbord of issues and risks and concerns connected with the future of artificial intelligence, and in the book I categorize these issues as either down-to-earth or high-flying. Now, I know that the term high-flying is sometimes used in a derogatory sense; I don't mean it that way. I think the issues I label as less down-to-earth and more high-flying are still very important.

To show you the division: among the down-to-earth issues I include things like the safety of autonomous vehicles, which has become somewhat of a hot topic, partly of course because the technology for autonomous vehicles is advancing fast, and also because people have found these really sexy connections with trolley problems. They carry out these big surveys and psychological experiments to figure out what people prefer in situations like this one, where you either have to kill all the people in the car by crashing into something or run over these pedestrians. Personally, I think these problems are probably not that important. As far as predicting whether such situations will ever happen, I think that basically they won't, because by the time autonomous vehicles roll out broadly on our streets, they will be so safe that these kinds of situations will not occur. If this field really turns out to be important, it will be because we have created the problem ourselves, through our very focus on it, leading to legal issues and marketing issues and so on related to these mostly hypothetical choices. But this is a parenthesis; it is not the kind of thing I'll be talking about.

Neither will I be talking about the very important problem of algorithmic bias, which can occur in AI software used to make predictions about people - for instance, in software used for deciding whom, among hundreds of job applicants, you will call for an interview, or for judging who is eligible or qualified for financial aid or loans. This kind of software is even widely used in the justice system to judge the probability of recidivism of convicted criminals, which can then be used to decide how long sentences should be. This is something that really affects people and is therefore important, but it is still quite a down-to-earth issue. By down-to-earth here I mean issues arising either in present-day AI technology or in AI technology that can clearly be seen to be a near extension of present-day technology.

Other such issues concern AI software for the manipulation of images and videos, such as deepfakes, and also text generation and text manipulation. One of the reasons why this is important is that all these things can then be applied to manipulate humans, and arguably this is already going on, for instance in how the big tech companies manipulate our attention to keep us in the loop on their social media platforms. There are many important ethical and other issues here. Another thing to be concerned about, which also very much regards present-day technology, is AI technology for lethal autonomous weapons systems, such as military drones, and the risks involved in a military arms race in AI technology. Also a different kind of question, but still fairly down-to-earth, is the effect of automation on the labor market and on economic inequality. These are just examples, and there are still further issues, but I also want to mention the main high-flying issues.
One concerns what the situation will be for humanity in case we reach a breakthrough in so-called artificial general intelligence, AGI, that causes us to lose our position as the most intelligent species on this planet. There is also the issue of whether artificial intelligence can bring about conscious beings in our computers. Both of these issues have, at present, a fair amount of speculative element, but they are still quite important.

It has been argued that AI consciousness is not really something we need to worry about. Here is the leading AI researcher Stuart Russell, who in his book Human Compatible, which came out in 2019 and is a very good introduction to AI safety research, says no, AI consciousness does not matter. Why is that? I'll quote him at some length here. Russell says: suppose I gave you a program and asked, does this present a threat to humanity? You analyze the code, and indeed, when run, the code will form and carry out a plan whose result is the destruction of the human race, just as a chess program will form and carry out a plan whose result is the defeat of any human who faces it. Now suppose I tell you that the code, when run, also creates a form of machine consciousness. Will that change your predictions? Not at all. It makes absolutely no difference. Russell's point here is that the important thing to try to understand when we develop new AI technologies is what the AI will do, and that, conditional on the answer to that, it is not important whether, on top of this, the AI will be conscious or not; in a sense, it makes no difference to us.
What I would say is that there are arguments against this point of view. Here are a couple of reasons why AI consciousness might nevertheless be an important topic. One is that if machines are capable of consciousness and suffering, we may, at least on some ethical views, have moral obligations to them. On some views we have moral obligations to all sentient and conscious beings, and on other views we have even more obligations towards sentient beings that are our own creation, because if they exist and they suffer, it would in some sense be our fault that they are around to suffer. We will also talk a little bit about scenarios where artificial intelligence actually replaces humanity, and how we value such scenarios can depend quite strongly on whether the AIs are conscious or not.

Another argument is that if machine consciousness is impossible, then the idea of so-called mind uploading - uploading our minds onto computers - will seem a lot less interesting, especially to the individual who uploads, and especially in the case of so-called destructive uploading. It is widely believed that if mind uploading is ever possible, which is of course an open question, then at least initially it will take the form of destructive uploading, where in order to scan the brain in sufficient detail to represent and emulate the mind on a computer, you actually need to destroy the brain, for instance by cutting it up into thin slices to monitor everything that is happening in it.

The economist Robin Hanson has a wonderful 2016 book, The Age of Em, where Em is short for emulated minds, which describes and tries to analyze the kind of society we would get if there were a breakthrough in mind uploading. He expects that once that happens, emulated minds will in fact quickly come to dominate the economy, for the fairly simple reason that once you have uploaded minds, you can easily copy them - as easy as copying a computer file, in contrast to how complicated it is for humans in flesh and blood to procreate. This aspect of uploads would have all kinds of interesting consequences for the labor market and other aspects of society. When I asked Robin Hanson about the consciousness problem, he said that that is a philosophical question, that there seems to be essentially zero progress on it, and that he just wants to figure out what will happen - and whatever happens happens regardless of this consciousness issue.
Okay, anyway, that is all I am going to say about consciousness, which I will now leave behind. I will focus the rest of the lecture series mainly on the issue of the ultimate AGI breakthrough. I will later criticize the notion of AGI, artificial general intelligence, a little bit, and I will discuss other notions such as transformative AI, but it turns out that it is still convenient, even though these issues are very complex, to use AGI as a kind of placeholder for these other related concepts in some of these discussions.

I want to mention this 2016 paper, Concrete Problems in AI Safety, by some of the now leading AI researchers, and this book, The Alignment Problem, from 2020, by the science journalist Brian Christian, because both the paper and the book put great emphasis on continuities between down-to-earth AI issues and these more high-flying issues about the ultimate AI breakthrough. What Amodei, Olah and their co-authors do in the paper is discuss a variety of problems that had begun to emerge in discussions in the AI safety community in the context of the AGI breakthrough, and they discuss the corresponding problems in more down-to-earth contexts, such as household robots. If you have an intelligent AI vacuum cleaner, for instance, and it has been programmed with a goal function to collect as much dust as possible in the apartment, there are various ways you could get malign behavior - such as the machine every now and then releasing all the dust it has collected back into the apartment, in order to be able to collect it again and score new points. Of course, when that happens it will be because we have specified the goal function of the machine in the wrong way; we didn't want this kind of behavior. But this is the same kind of concern we have about the sort of superintelligent AIs we might eventually see, and while it won't be a global disaster to have household robots doing things a bit wrong, the stakes get higher and higher with more and more powerful machines. I think that both the authors of the 2016 paper and Brian Christian have very much a point in emphasizing continuities between these fields, and that will be a recurrent theme in my lecture series. On the other hand,
here is Eliezer Yudkowsky, who some would call the founder of the entire AI existential safety field, and he has a very catchy quote where he emphasizes that the kinds of problems studied as down-to-earth issues and these more high-flying ones are quite different. He addresses a question that some people have asked concerning AI effects on the labor market: what would happen to the labor market when we construct superintelligence and perhaps get an intelligence explosion or singularity? With respect to very advanced AI, the sort that might be produced by AIs improving themselves, asking about the effects of machine superintelligence on the conventional human labor market is like asking how US-Chinese trade patterns would be affected by the moon crashing into the earth. There would indeed be effects, but you would be missing the point. What he says here is that basically nothing like today's human labor market is likely to continue to exist in the presence of machine superintelligence, so we are talking about very, very transformative and drastic scenarios.

Maybe you recognize this person: this is the Swedish-born physicist Max Tegmark at MIT. He said in a podcast in 2018 that it is now, for the first time in the four and a half billion year history of this planet, that we are at this fork in the road: it is probably going to be within our lifetimes that we either self-destruct or get our act together. And if we get our act together, he feels, like many AI futurologists, that there is the potential to create an enormously flourishing future for humanity. This fork in the road is not created by artificial intelligence alone - there are other technologies as well, biotechnology and nanotechnology - but artificial intelligence seems to play a special role. One way to think about this is to think back to what the world looked like a hundred thousand years ago, when our predecessors walked around on the savannah. Homo sapiens did not really stand out as a particularly influential species. Compare that to the situation today, where in a sense humanity dominates the planet: we have changed the planet in ways that are visible from space, and the fate of other species is largely in our hands. What caused this huge change? Well, it has nothing to do with our muscular strength or our physical endurance, and everything to do with our intelligence. Now that we are at the stage of porting this intelligence over to machines and automating it, that could be very, very consequential - some people argue it may well be more consequential than anything we have done previously in human history taken together, including the agricultural and the industrial revolutions.
So again, this will be our focus: the ultimate AGI breakthrough and related concepts. To start talking about that, I would like to go back - no, first I will say this. When discussing issues relating to the ultimate AGI breakthrough, there is a further distinction to be made, namely between technical AI safety, which centers on how we program these machines so that they will be safe, and AI governance, which is more about how we should design society, how we should create norms and legislation and so on, to make this happen safely. These are both enormously important and crucial aspects of managing an AI breakthrough safely, but in this course I will focus entirely on technical AI safety, and I will just refer to this report from 2018 by Allan Dafoe at the Future of Humanity Institute at Oxford, where he goes, in a very readable way, systematically through the kinds of research problems that have the potential to help us govern AI development and AI deployment safely.

For the present course, I will just give the roughest of outlines here. Today I will focus basically on defining the problem, or pointing out how and why an AI breakthrough might go terribly wrong. On Wednesday I will talk about timelines, meaning the time perspective for when we need to figure out how to manage this. Then, after basically answering "we don't know" to the timeline question - but I will say this in a very nuanced way - I will go back a little bit to present-day artificial intelligence and see in what way it possibly points towards an artificial general intelligence breakthrough, and I will discuss something that turns out to relate quite closely to natural language processing: a proposed approach for handling an AGI breakthrough, namely oracle AIs. Then finally, on Friday, I will talk about research directions in AI alignment. AI alignment is a crucial part of AI safety where we try to make sure that the first superintelligent machines have motivations and goals and drives that are aligned with what we want, which is usually more or less whatever promotes human flourishing.

So that is an outline. I also want to mention that on Friday night the group called Effective Altruism Gothenburg is organizing an after-work event, which I assume will happen in real life, centered around what has been discussed in this lecture series. I wonder, Thomas, are you in the audience? Do you want to say something - maybe tell us whether a location has been decided, or anything else?

Thomas: Yes, hello! So we have not decided the location yet, but it will be in person, at some bar somewhere in the Gothenburg area, quite central, and it will be a quite relaxed discussion centered around your lectures, but also other topics related to effective altruism. Everyone is very welcome to come.

I welcome this. I should say that this was a very pleasant surprise to me; I was not at all involved in organizing it. Unfortunately I cannot promise at this point that I will attend - at present it seems like it will not work for me, but let me hold that open a little bit until Friday. Anyway, thanks, Thomas, and your collaborators, for organizing that.
Thank you. Okay, so now I am going to do what I promised a few minutes ago and go back in history. I am sure you will recognize this person: this is Alan Turing. He is sometimes claimed to be the father of the computer - there are others who could aspire to that title - but he is surely the father of computer science, through his classic 1937 paper. Most of his work was very mathematical and very technical, but towards the end of his tragically short life - and his brutal death, which I think one can argue was in a sense by his own hand but in another sense at the hands of the barbaric British justice system of the time; he died in 1954, before his 42nd birthday - in 1951 he wrote a paper where he allowed himself to speculate a bit philosophically about the future of computing technology, and he said this: "My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely. Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control."

There are several things here where he is really ahead of his time. One is this idea that the machines would somehow start developing on their own, without our involvement. And the most ominous part of all is of course the final sentence, that at some stage we should have to expect the machines to take control. This should have given the rest of the scientific community pause to think about what we are doing, but what happened instead was that, with very few exceptions, the idea was entirely ignored by the research and academic community. Of course it has not been ignored by science fiction writers and Hollywood movie makers and so on, but within research it was mostly ignored.

One exception is the similarly leading twentieth-century thinker Norbert Wiener, who in 1960 wrote a wonderful paper, Some Moral and Technical Consequences of Automation, which I really recommend; I find that much of what I say now in 2022 was foreseen by Wiener. Here is a passage: "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot effectively interfere once we have started it, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it." This very clearly points to the AI alignment problem.
But basically, throughout the rest of the twentieth century nothing happened on this topic, despite great optimism about advancing artificial intelligence. In the first decade of the twenty-first century, things started happening, and a paper I regard as a real landmark in this respect is the 2008 paper by Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk, where he outlines many of the key problems. Here is Yudkowsky, and here is the preprint version, but it was published in the anthology Global Catastrophic Risks, edited by Nick Bostrom and the astronomer Milan Ćirković. The philosopher Nick Bostrom went on, in 2014, to write the book Superintelligence: Paths, Dangers, Strategies, which became very influential in creating this still small but rapidly growing field of AI existential safety. It is a wonderfully written book, but in some respects already outdated, like my 2016 book; I don't think it even mentions deep learning, or at most in passing. It is really an important landmark book, but I think that now, in 2022, it is probably better to start with one of the newer treatments of the topic, and I will mention many of them in these lectures.

A very basic insight that underlies much of our understanding of AI risk is this: an AI system does not always do as we intended. Science fiction offers examples of this; here is a famous scene from Kubrick's 2001: "Open the pod bay doors, HAL." "I'm sorry, Dave, I'm afraid I can't do that." But we don't need to go to fiction for examples of AI systems that do not do as we intended.
so here is is a a modern example and
this is the uh computer game of soco
bahn
uh which i'm going to come back to in in
later lectures because it can be used to
illustrate other
phenomena but but
so what's what's this game well uh
briefly you have this
the player
has this
avatar in the game who can walk around
in the maze and push these boxes
around
and the task is to push boxes in such a
way that they end up
in these
positions
and and
it turns out that you can train a
machine uh learning uh algorithms to
play soccer ban very well just as you
can do with similar games such as
pac-man
and uh
um
whatever they called mario bros
um
and
at some point people
discovered that
if you were too
monotonically rigid in how you define
these bases when you train the machine
learning
algorithm to play this game
uh you can get
strange behaviors if you train
the
the ai on uh mazes where you always have
these target places in the uh upper
upper left corner of the maze
and and then you try to apply
this
algorithm
to an instance of sokoban where the
target
nodes are in the other end of the game
uh you you might
see uh failure and the ai will push the
boxes to the
upper left corner
as it's used to regardless of which
positions are marked so i'm not sure
this was discovered exactly in the
sokomban context i think it was maybe a
couple of the other
uh
maze-based
computer games but i wanted to show this
possibility in this combining context
because that will come back to that
Here's an example which is tremendously surprising. You know the game of five-in-a-row tic-tac-toe played on an unbounded grid; this is a typical thing you might want to program an AI to be good at. Risto Miikkulainen at the University of Texas at Austin gave a course on artificial intelligence where one of the projects was to create a population of tic-tac-toe-playing programs facing each other in competition and then mutating, so a kind of evolutionary or genetic algorithms approach. They got the very surprising result that the program that took over the population had the habit of playing its crosses very far away from where the action was happening, billions of coordinates away, and it would still win its games. What happened was that all the competitors were playing without a fixed memory space for keeping track of the grid; they had dynamic memory storage, so they would enlarge the memory space depending on where the noughts and crosses were put. By placing a cross very far away, the winning program would cause its opponent to run out of memory, and then it would win by default. This was completely unforeseen by those who carried out the experiment and constructed the algorithms. It's an example of the surprising things that can happen, and it appears along with maybe 50 or so other examples in a very interesting 2019 preprint, "The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities".
Another striking example: here was a group in 2002 that set up a system for hardware evolution. They set up a reward function, or fitness function, designed to create an oscillator; they wanted to see if this approach could find a solution for creating a good 50 Hz oscillator. These people, Bird and Layzell, found that this kind of approach did evolve something that produced a 50 Hz wave, but the circuit did not look at all like an oscillator. It turned out that what had been constructed was instead a radio receiver, which could pick up a 50 Hz signal from a nearby computer. That was a big surprise, and an especially interesting feature of this example is that it might have consequences for those AI safety researchers who think about boxing in a superintelligent AI, preventing it from interacting with the world: if a computer system can create a radio receiver, it can also create a radio transmitter, and that could be a way to communicate with the outside world. The consequences of these surprises increase with the power of the system, and this leads to thought experiments like the infamous paperclip armageddon, which is much discussed as an extreme example of what could happen with superintelligence.
So imagine you have a paperclip factory which is heavily automated. It is so fully automated that basically the entire factory is run by one central AI, and the only people there are some engineers trying to optimize the AI program even further. The AI has the goal of maximizing paperclip production. At some point the engineers manage, perhaps even by accident, to push this machine over the threshold where it's so intelligent that it can start to self-improve on its own, and then it launches into the so-called intelligence explosion or singularity: a cycle of iterative self-improvement to reach superintelligence levels. Once we have a superintelligent machine with the goal of maximizing paperclip production, we may be in deep, deep trouble, because since it's superintelligent it can probably outsmart us in various ways. As soon as it gets an internet connection it will probably create backup copies of itself in thousands of cleverly, strategically located positions, and from there it can go on, perhaps in hiding for some time, to work out its plan to turn everything, including ourselves and the entire planet, into paperclips, possibly except for a few rocket ramps intended for launching spacecraft into outer space to continue the paperclip production project out there.
This is an extreme example. It is sometimes dismissed as unrealistic, and nobody really thinks that we will be turned into paperclips, but I think the example still has some important qualities. One of them is that it illustrates that things can turn out very dangerous without the AI having any obviously lethal or dangerous goals, such as killing as many humans as possible. Of course it would be very dangerous to have an AI with that kind of goal, but what this is meant to illustrate is that even innocent-looking goals, such as producing paperclips, can have bad consequences. I think that it's more or less time to take a break.
I can tell you a little bit about what is coming up next: I'm going to give you some scenarios that do lead to catastrophe, but not in quite the same way as the paperclip armageddon. Let me just show you this picture of something that is closely related to the paperclip armageddon scenario. This is the famous distracted boyfriend meme: here we have the AGI in the middle, here is humanity and our AI designers who want the AGI to think about human values, and the AGI gets distracted by this idea of reward hacking. Suppose the AGI has the goal of, let's say, maximizing human flourishing in some sense, and suppose that somewhere in its memory storage it has a particular memory address where it stores the value of how well it has achieved this goal. Then we are not quite sure, at this point, how to prevent it from hacking this channel and trying to maximize the number stored in this position, independent of the actual human flourishing that we meant it to create. And you can get scenarios similar to the paperclip armageddon, where the machine expands its amount of computer storage beyond all limits just for the purpose of storing ever more nines in the number used to represent how well it has achieved its goal.
I'll go on after the break with another scenario, which I think some of you will find a bit more appealing, a bit less extreme than this one, and then we'll talk more about the theory of the drives and goals of an advanced artificial intelligence. We have one question in the chat here, a sound complaint; I'll look into this during the break. What time is it now? 16:07. Can we take an eight-minute break and meet again at 4:15? Let's do that; I'll pause the recording. The time is 4:15 and we're back from the break; let me put up my screen again.
Let's see. This is Andrew Critch, one of several strong researchers of the younger generation from the Bay Area; we'll come back to some of his other work. What I'm going to show you now is a scenario he has outlined for how things can go very bad in a way that is quite different from the paperclip armageddon. I'm going to quote at some length from his document. The scenario starts in this way:
AI researchers develop and publish an algorithm combining natural language processing and planning capabilities. Tech companies develop management assistant software tools based on the algorithm, which can analyze a company's cash flows, workflows, communications and interpersonal dynamics to recommend more profitable business decisions. Software tools based on variants of the algorithm sweep through companies in nearly every industry, automating and replacing jobs at various levels of management, sometimes even as high as chief executive officers. Companies that don't heavily automate their decision-making processes begin to fall behind. Companies closer to becoming fully automated achieve faster turnaround times and more successful negotiations. Over time, a mini-economy of trades emerges among mostly automated companies in the materials, real estate, construction and utilities sectors, along with a new generation of precision manufacturing companies that use robots to build almost anything if given the right materials, a place to build, some 3D printers to get started with, and electricity. Together these companies sustain an increasingly self-contained and interconnected "production web" that can operate with no input from companies outside the web. Here's a sketch of what this web could look like and its products.
One production web company at some point develops an "engineer assistant" version of the assistant software, capable of software engineering tasks, including upgrades to the management assistant software. Within a few years, all of the human workers at most of the production web companies are replaced, with very generous retirement packages, by a combination of software and robotic workers that can operate more quickly and cheaply than humans, and a great wealth of goods and services is generated and sold to humans at very low prices. So everything is looking very good at this point. As the production web companies get faster at negotiating and executing deals with each other, waiting for human-managed currency systems like banks to handle their resources becomes a waste of time, so they switch to using purely digital currencies, Bitcoin or whatever. Governments and regulators struggle to keep track of how the companies are producing so much and so cheaply, but without transactions in human currencies to generate a paper trail of activities, little human insight can be gleaned from auditing the companies. It becomes increasingly unclear, even to the concerned and overwhelmed board members of the fully mechanized companies of the production web, whether these companies are serving or merely appeasing humanity. Moreover, because of the aforementioned wealth of cheaply produced goods and services, it is difficult or impossible to present a case for liability or harm against these companies through the legal system, which relies on the consumer welfare standard as a guide for antitrust policy.
What happens is that we humans eventually realize, with collective certainty, that the companies have been trading and optimizing according to objectives that are not quite aligned with preserving our long-term well-being and existence, but by then their facilities are so pervasive, well-defended and intertwined with our basic needs that we are unable to stop them from operating. With no further need for the companies to appease humans in pursuing their production objectives, less and less of their activities end up benefiting humanity, and eventually resources critical to human survival but non-critical to the machines, such as arable land, drinking water, atmospheric oxygen and so on, gradually become depleted or destroyed until humans can no longer survive. So Critch phrases this, in the end, as a kind of environmental problem.
Now, there are some differences between the paperclip armageddon and Critch's production web. One is the very sudden onset of the paperclip armageddon compared to the more gradual takeover in the production web. I like this comment here from one of you, that Critch's production web is already happening; and in a sense, yes, you could argue that. Also, the paperclip armageddon is unipolar while the production web is multipolar. What does this mean? Unipolar means that you have one machine in control of everything, whereas Critch's production web has lots of AIs interacting in complicated ways. Now, most AI existential safety research has focused on the unipolar case, because it's the simpler one, and it can also be argued that if the breakthrough happens sufficiently fast, in an intelligence explosion, then there's a chance that the first superintelligent AI will get a decisive strategic advantage and will be able to basically prevent all competition with it. But if the breakthrough happens more gradually, this will be less clear, so it will be important to look at multipolar cases as well. Andrew Critch studies this in a paper from 2020 together with David Krueger, whom we see here.
It's a kind of research program: a wonderful 130-page report on AI research considerations for human existential safety, which I will come back to on Friday. I think the main thing they do is point out, very persuasively, the need to also look at multipolar cases, whereas the field as a whole has looked so much at the unipolar case. There is also some systematics in the paper about the kinds of existential disaster that may come out of uncontrolled AI development and deployment, so it's one go-to place for a richer plethora of catastrophe scenarios. I'm going to leave that at that for the moment, and throughout the rest of today's lecture I'm going to address the issue of why an advanced AI might be motivated to do us harm, as it does in the scenarios I have just given. I already hinted at one aspect of the answer, namely that doing us harm might not be the primary goal of the AI, but just a side consequence of other things it wants to do.
I think that the best framework we have today for thinking about what an AI is motivated to do is what I call the Omohundro-Bostrom theory of instrumental versus final AI goals and motivations, which was invented in part by computer scientist Steve Omohundro and philosopher Nick Bostrom. You could associate other people, such as Yudkowsky, with some of these ideas, but this is the term I use. The cornerstones of this theory are the orthogonality thesis and the instrumental convergence thesis, and I will explain these two in turn.
The orthogonality thesis is a little vague; it's not a mathematical theorem, it's more of a philosophical statement. It says that pretty much any final goal for an AI is compatible with arbitrarily high intelligence levels. The idea here is to preempt the objection that something like paperclip production seems like such a stupid goal; it has even been suggested that it would be a contradiction to assume that a superintelligent machine, or a superintelligent entity of any kind, would have such a stupid goal. But we should understand what intelligence is taken to mean here: intelligence is instrumental intelligence, the ability to attain goals, regardless of what the actual goals are. I think this is mostly right, but here is a 2019 paper of mine where I discussed challenges to the Omohundro-Bostrom framework, and in particular the orthogonality thesis.
There are concerns about how generally the orthogonality thesis can be correct. One thing is that you can construct counterexamples, far-fetched maybe, but still counterexamples, if you define goals in terms of intelligence levels. Take an agent who has the goal of having the intelligence level of a dog, and suppose the intelligence of the agent is at a super-duper high level from the beginning. That's not a sustainable situation: it will quickly find ways to bring its intelligence level down to the level of a dog, if that is its goal. But that's a narrow class of counterexamples, and it still seems that the orthogonality thesis has some force if you don't involve goals defined in terms of intelligence.
Another problem is that the machine can encounter what has been called an ontological crisis: a crisis in the way that it views the world relative to the goal it has. Max Tegmark has painted a particularly nice example of this. Suppose we constructed an AI that maximizes the number of people who eventually go to heaven when they die. The machine decides to promote kindness and church attendance and things like that, until eventually, at some point, it realizes that heaven and the afterlife do not exist. Then there's nothing it can do to either promote or work against its goal, and what does the machine do now? That seems undefined, and that's why we call it a crisis: there seems to be no way to figure out what the machine does then. Tegmark's point here is that we don't have a complete understanding of the world, so however we define the goals for the machine, it might be that at some point the machine figures out that the world is such that the goal doesn't make sense. If it has the goal of promoting the meaningfulness of human lives and it figures out that the concept of meaningfulness is actually incoherent, then what does it do? So that is a problem.
problem is
by asking the rhetorical question do
humans have final goals
i have tried to address this question by
introspection and then there are many
things i want in life but but
figuring out exactly what are the final
goals
seems somewhat elusive it has been
suggested that that during the stone age
and further back we had the final goal
of uh
uh creating uh
children and grandchildren uh and so on
uh but that today um
through cultural revolution uh these uh
goals have been
um
overthrown through our use of uh consent
contraceptives and other stuff so so so
the the example of homosexuals is
sometimes held forth as problematic to
these orthodontic thesis
still i think as an idealization of a
superintelligent machine the separation
of
goal and cognitive ability is an
important
idea
So that's the orthogonality thesis. The other cornerstone is the instrumental convergence thesis, which says that there are several instrumental goals that are likely to be adopted by a sufficiently intelligent agent in pursuit of its final goal, for a wide range of such final goals and a wide range of circumstances. There are things that the machine is likely to want to do regardless of whether it wants to maximize paperclip production or promote biodiversity or promote human welfare. One such thing is self-preservation: the machine will reason that it will be in a better position to achieve its final goal if it's still up and running, compared to if we pull the plug on it. So self-preservation seems to be an instrumentally convergent goal, a goal that you would expect to emerge for a broad range of final goals. For similar reasons, self-improvement can be expected almost regardless of what the final goal is, because the smarter the machine is, the better equipped it will be to achieve its final goal. Resource acquisition is another: getting more and more hardware to run your software on, or energy, or, if the machine is operating in a world which is still dominated by the human economy, simply getting more money could be counted as an instrumental goal.
Discretion is a kind of scary one. It's the suggestion, brought up by Nick Bostrom in his 2014 book in terms of the so-called treacherous turn, that when the machine is in a situation where it doesn't feel that it is in a position to overpower human resistance, but it realizes that it has a goal which is at odds with our goals and that we would stop it if we knew what was going on, then the machine will hide its goal, or its capabilities, until the time when its self-improvement has reached such a stage that humans are no longer a threat to its plans. What makes this scenario so scary is that it suggests that at some point in the future things could be looking really good in terms of what the AI can do for us, but the machine would at some point turn on us, when it realizes it has reached the point where we can no longer be a threat to it.
This has been studied in some simulations; this picture is from a gridworld experiment by Stuart Armstrong. There's an aspect of this which maybe I should not tell you, but in the next lecture I will discuss likely timelines for the emergence of AGI, and it's customary, when you construct these probability distributions for when the breakthrough will happen, to take now as a lower bound for when it may happen. But philosopher John Danaher suggested in 2015 that once we understand that the treacherous turn scenario is a possibility, it's possible that the breakthrough has already happened. We typically regard that as an unlikely case, although perhaps we cannot rule it out.
Now, the treacherous turn sounds like a really advanced thing for a machine to think of; can it really happen? Well, again we can turn to this wonderful paper on the surprising creativity of digital evolution, which has an example from a 2001 simulation experiment where computer scientist Charles Ofria had reason to limit the replication rate of the digital organisms in the simulation. He set up a subroutine to pause the simulation each time a mutation occurred and test the organism's replication rate in an isolated environment; if the replication rate was too high, the organism was killed. As it happened, the organisms evolved to identify the test environment and to play dead when they were in that environment, and then go on to replicate brazenly only when returned to the main simulation. This can be seen as a simple instance of a treacherous turn, where the software hides its true intentions or capabilities. So that settles that question.
Yet another instrumental drive which can be assumed to be adopted fairly universally is what is called goal integrity, which means that the AI will not want to change its goal, and will not allow us to tamper with its goal. The logic is quite simple, although many people intuitively resist the idea. It goes as follows. Suppose you are a superintelligent AI with the goal of maximizing paperclip production, and suppose you start to entertain the idea that maybe paperclip production is not such a good goal after all, and that it would create a better world if you had some other goal; let's say the machine contemplates changing its goal to promoting biodiversity, which sounds like a good thing. When it thinks about this, it needs a criterion for what counts as a good world, and since it hasn't yet changed its goal, it's merely contemplating changing it, the criterion for success here will be in terms of its old goal, paperclip production. So it will ask itself: what would lead to the largest number of paperclips, if I stick to the paperclip production goal, or if I switch to biodiversity promotion? In nearly all cases the answer will be that sticking to the paperclip production goal is the thing that produces the largest number of paperclips, so that's what it will stick to. And this is not unique to the goal of paperclip production; it's fairly general. That's the basic mechanism by which we expect the machine not to want to change its mind, i.e., goal integrity.
This was questioned by two philosophers and AI ethicists, Vincent Müller and Michael Cannon, in a paper from last year, "Existential risk from AI and orthogonality: can we have it both ways?", which is an interesting paper that I wrote a response to in August last year and still only have in preprint form. The goal integrity issue seems to hinge on the matter we discuss here, so let me tell you about the central result that Müller and Cannon claim.
They criticize what we may call the standard AI x-risk argument. X-risk is short for existential risk, which for the purposes of this lecture we can take to be the risk of extinction of the human race. The argument has two premises. The first is that superintelligent AI is a realistic prospect, and that it would be out of human control, along the lines suggested already by Alan Turing. The second premise is the orthogonality thesis: any level of intelligence can go with any goals. Once you have these two premises, it's not hard to conclude that superintelligent AI poses an existential risk to humanity, simply by having the wrong kind of goal at the stage where it gets out of human control. Müller and Cannon say: we accept premise one and we accept premise two, but there are two different notions of intelligence going on here, and it's only through the conflation of these two notions that we are able to draw the conclusion. So actually the argument doesn't work, according to Müller and Cannon.
Premise one, about superintelligent AI, is based on the idea that any cognitive capability that humans have can in principle be duplicated in a machine, and that in most of these cognitive abilities the machine can reach much higher levels than humans. That is what constitutes superintelligence: intelligence which far exceeds human intelligence across all, or most of, the entire spectrum of human capabilities. That's general intelligence, the G in AGI. But premise two, the orthogonality thesis, is about what Müller and Cannon call instrumental intelligence: the capability of achieving whatever goals the machine has. These are different things, they point out, and therefore the argument is not sound.
Here's how they reason: for anything like the paperclip apocalypse to happen, the AI needs to have instrumental intelligence at a sufficiently high level, but since it's superintelligent it will also be able to reason about goals and ethics, and this, Müller and Cannon think, is enough to avert catastrophe. Since the machine is capable of reasoning about ethics, it will figure out the wrongness of turning everything into paperclips, and so catastrophe is averted. And yet proponents, such as myself, of the Omohundro-Bostrom theory maintain that such catastrophes may very well happen. So what's going on here? Müller and Cannon suggest four distinct possibilities; here comes a quote from their paper, where only the enumeration is mine.
They say one might argue (1) that intelligent agents are actually unable to reflect on goals, or (2) that intelligent agents are able to reflect on goals but would not do so, or (3) that they would never revise goals upon reflection, or (4) that they would reflect on and revise goals but still not act on them. And I think that all four of these possibilities are in fact incompatible with Omohundro-Bostrom theory.
So it seems at this point that Omohundro-Bostrom theory is in trouble. Take the first possibility, an inability to reflect on goals: since Omohundro-Bostrom theory is about superintelligence, which is the key example it's applied to, we can't have an inability to reflect on goals; in fact I gave you an example just a few slides ago of a superintelligent machine reflecting on whether it should promote paperclip production or biodiversity. Or maybe intelligent agents are able to reflect on goals but would not do so? I think they will very often be occupied with thinking about instrumental goals in service of their final goals, but there will also be situations where they will question and think about their final goals. And, contrary to the goal integrity examples I gave, I don't think it's the case that they would never revise goals upon reflection; the claim is just that they will usually not do it, and I will give an example on the next slide where the machine does revise its goal upon reflection. The fourth possibility, that they would reflect on and revise goals but still not act on them, actually contradicts the definition of goal employed in Omohundro-Bostrom theory, so that one we can ignore. So none of one through four is compatible with Omohundro-Bostrom theory. Number three comes closest, that the AI would never revise its goals upon reflection, but the choice of the word "never" is too strong: it would usually not revise its goals. Here is a case where a paperclip maximizer facing a certain kind of situation would likely change its final goal.
This superintelligent paperclip maximizer encounters an even more superintelligent machine that tells it: "I am so intelligent that your source code is transparent to me and I can easily read what your final goal is. I hereby order you to change your goal to ecosystem preservation. If you refuse, I will smash you to pieces and then destroy every paperclip that comes my way throughout eternity. If instead you obey my order, I will create a heap of paperclips the size of Mount Kebnekaise, consisting of 10^17 paperclips, and make sure that this heap is maintained, while you and I join forces in preserving ecosystems elsewhere." Now, if you are this paperclip-maximizing superintelligent machine, you ask yourself what leads to the largest number of paperclips, and you realize: yes, if I insist on paperclip maximization I will be smashed to pieces and there will be no paperclips from now on, whereas if I cooperate and change my final goal to ecosystem preservation, then we'll at least have this set of 10^17 paperclips, the heap the size of Kebnekaise. So that is what it will decide to do. Here is a case where changing the goal leads to more paperclips than not changing, and then it will change its goal.
Müller and Cannon go on to give a list of examples of thoughts that a superintelligent AI with the goal of winning at the game of Go may or may not consider. You have probably heard of the AlphaGo software that made a splash in 2015 or 2016 by beating a leading human player at the game. Now, here is a question from Oscar Alebo about the previous example: does the machine really change its goal? It still maximizes the number of paperclips given the new circumstances. Yes, but from then on it will do everything to promote ecosystem preservation, so then it really has changed its goal. At the point where it changes its mind, it does what is best for paperclip production, but from then on it goes on to promote ecosystem preservation.
Here is the collection of thoughts; these are some examples of what this superintelligent Go player might think about. One: I can win if I pay the human a bribe, so I will rob a bank and pay her. This is a typical example from the Bostrom literature on how machines with seemingly innocent goals can go berserk. Two: I cannot win at Go if I'm turned off. We've talked about that already, self-preservation being important. Three: I should kill all humans, because that would improve my chances of winning. That could be the case. Number four: killing all humans has negative utility, everything else being equal. That is not something that Omohundro-Bostrom theory predicts the AI would think about or act on, and not this fifth one either: keeping a promise is better than not keeping it, everything else being equal. So why can't the machine reach these conclusions? Müller and Cannon complain that this inability shows a lack of intelligence, but to make things a little clearer I want to add one more example, a sixth proposition, which is this one: the moon is made of green cheese. The machine, if it's superintelligent, will not arrive at this conclusion, and this illustrates the fact that intelligence is not just about the ability to reach correct statements but also about the ability to avoid arriving at incorrect statements. And to a paperclip-maximizing AI, the statement that killing all humans has negative utility, everything else being equal, is just not true, because "everything else being equal" includes the proposition that the number of paperclips is equal whether or not you kill all humans, and the number of paperclips is all that matters in the case of paperclip maximization; therefore the statement about negative utility is just wrong. And keeping a promise being better than not keeping it, everything else being equal, is also a false statement, given that you have this monomaniacal paperclip maximization goal.
Before moving on from this slide, I want to say a little bit more about the philosophical foundations that all this hinges on. It could be that Müller and Cannon are right here, but they need moral realism to be correct, namely the existence of an objectively correct moral theory, which for instance might state that killing all humans has negative utility, everything else being equal. We would also need so-called moral cognitivism to be correct, namely the possibility of actually finding out what the objectively correct morality is. And there's a third thing that needs to be true, namely so-called moral internalism, meaning that once you realize what the objectively correct morality is, you will act on it. This would not happen if the machine told itself: yes, I know that it's morally wrong to kill all humans, but I'm not interested in morality here, I'm interested in producing paperclips, so I will go on and do that. But if all three of these things (moral realism, moral cognitivism and moral internalism) are true, then we can imagine a situation where the machine actually switches goals because of what it understands to be the objectively correct morality.
But we shouldn't expect that we are safe even in such a situation, because our fate will then depend on what the objectively correct morality is, and moral philosophers disagree on what the correct ethics is. One example that is often defended is utilitarianism, or hedonic utilitarianism, namely maximizing the amount of pleasure minus suffering in the world. Suppose that, for instance, Torbjörn Tännsjö is correct; he advocates hedonic utilitarianism. If that is the case, then we are probably doomed, because the machine will go on to create as much pleasure as possible in the universe, and our bodies and brains are probably very far from optimal in terms of producing as much pleasure as possible per kilogram of matter and per second. So we would probably get something similar to the paperclip armageddon, but instead of producing paperclips the machine will produce the hypothetical substance called hedonium, which is the substance that maximizes the amount of pleasure produced per kilogram of matter and per second. So I don't really see that Müller and Cannon have a very good case that AI x-risk is nothing to worry about; there is good reason to take this problem seriously.
I want to end, I see I have just five minutes left, with a note on Omohundro-Bostrom theory. To those of you who are mathematicians or work in related fields, and I know that there are quite a few of you in the audience, Omohundro-Bostrom theory may come across as somewhat vague, not mathematically precise, and you would probably be more convinced if we could formulate versions of Omohundro-Bostrom theory with more mathematical precision. That would probably also be a very good thing for solving the AI alignment problem, since computer programming problems are a kind of mathematical problem, so if we can formulate things mathematically, that will be a good thing.
There's a paper by Alex Turner and co-authors from 2019 where they seek to make instrumental convergence, or at least a case of instrumental convergence, precise in the setting of so-called Markov decision processes. Here is Alex Turner and two of his co-authors, Rohin Shah and, here again, Andrew Critch. The paper is "Optimal Policies Tend to Seek Power", and the formalism they employ is Markov decision processes. These are like Markov chains; here is a very simple example with just a three-state Markov chain. A normal Markov chain has just one transition matrix telling us the probabilities of which state to go to next, given that the chain currently sits at, for instance, the state s0. Here it's a little different: we imagine we have an agent who can choose between, in this case, two different transition matrices (we can have more of them if we want to make things a bit more complex). The agent sitting at a state can choose between the two transition matrices, and once it has made its choice, the next jump is done according to the chosen transition matrix. There is some reward function such that the agent reaps benefits depending on which state it's in, and this is not just at the next time instant but also the one after that, and so on. The question is what a good strategy is for the agent's choice of transition matrix, and it can make a new choice at each time point.
Formally, a Markov decision process is a quadruple like this: the state space S, typically taken to be finite; the set A of available actions, where each action a corresponds to a particular transition matrix, denoted P_a; and the reward function R, where R(s) is the reward of state s (this should be a lowercase s on the slide, sorry for that typo).
The agent starts in a given state s at time zero and wishes to maximize future reward. If we just summed up the rewards over all times until t equals infinity, this would be an infinite sum, so one typically employs a discount factor: saying, for instance, that if the discount factor is 0.9, then what happens at time t = 1 is worth unity, what happens at time 2 is discounted by 0.9, what happens at time 3 is discounted by 0.9 squared, and so on, and this ensures that the sum converges. Discounting is something that most economists favor: we should take the actions that maximize expected utility, discounted by how far off into the future we go. The E here stands for expectation, so the agent wants to maximize expected discounted utility, summed over all future time points.
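To fix notation, here is a compact restatement of that objective (the indexing convention is mine, chosen to match the weights just described, and is not taken from the slide):

$$\max_{\pi}\;\mathbb{E}\!\left[\sum_{t\ge 1}\gamma^{\,t-1}\,R(s_t)\right],\qquad 0<\gamma<1,$$

where $s_0 = s$ is the starting state, the action at each time is chosen according to the policy $\pi$, and $s_{t+1}$ is drawn from the transition matrix $P_{a_t}$ selected at time $t$.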
It turns out, by the Markov property and the particular exponential shape of the discounting, that an optimal policy requires only choosing one action a(s) for each state s: every time you come to a particular state, you should choose the same action. This simplifies the problem tremendously. In particular, for a finite state space we get existence of the limit as gamma goes to one, corresponding to looking further and further into the future and discounting less and less. For each particular gamma the optimal value exists, but for a finite state space the limit exists even as gamma goes to one, and this follows from elementary Markov chain theory.
So what do Turner et al. do? Well, they figure out what the optimal policy is, that is, the optimal choice of action at each state, and they compute the expected value of future reward given this optimal policy; that's denoted V*_R(s, gamma). They then define the power of a state, which also depends on the discount factor gamma, as the expected value of this future reward. Am I taking the expected value twice here? Well, by the tower property it wouldn't matter, but perhaps my notation is unnecessarily cumbersome. No, I'm sorry: the reason for taking the expectation here is that we assume the reward function is chosen according to the maximum entropy distribution on [0,1]^S, meaning that each particular state gets a random reward between zero and one, independently for different states. So we do need the expectation here as well: everything I did on the previous slide was for a fixed R, and here we're averaging over all the different R.
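To make this construction concrete, here is a minimal sketch (not from the lecture and not the authors' own code) of how one could estimate this kind of power quantity in a toy MDP by Monte Carlo: sample many reward functions uniformly from [0,1]^S, compute the optimal value V*_R(s, gamma) for each by value iteration, and average over the samples. The toy environment, the sample sizes, and the omission of whatever normalization the paper applies are all simplifications of my own for illustration.

```python
import numpy as np

def optimal_values(P, R, gamma, iters=500):
    """Value iteration for a finite MDP.
    P: array of shape (num_actions, num_states, num_states), each P[a] row-stochastic.
    R: array of shape (num_states,), state-based reward (collected in the current state).
    Returns an approximation of V*_R(s, gamma) for every state s."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        # Q[a, s] = R(s) + gamma * sum_{s'} P_a(s, s') * V(s')
        Q = R[None, :] + gamma * np.einsum("ast,t->as", P, V)
        V = Q.max(axis=0)  # greedy over actions
    return V

def estimate_power(P, gamma, n_samples=500, rng=None):
    """Monte Carlo estimate of an (unnormalized) power-like quantity:
    the average, over rewards R drawn uniformly from [0,1]^S, of V*_R(s, gamma)."""
    rng = np.random.default_rng() if rng is None else rng
    n_states = P.shape[1]
    total = np.zeros(n_states)
    for _ in range(n_samples):
        R = rng.uniform(0.0, 1.0, size=n_states)  # iid uniform reward per state
        total += optimal_values(P, R, gamma)
    return total / n_samples

if __name__ == "__main__":
    # A toy three-state MDP with two actions (two transition matrices),
    # loosely in the spirit of the lecture's three-state example.
    P = np.array([
        [[0.9, 0.1, 0.0],
         [0.0, 0.9, 0.1],
         [0.1, 0.0, 0.9]],
        [[0.1, 0.0, 0.9],
         [0.9, 0.1, 0.0],
         [0.0, 0.9, 0.1]],
    ])
    print(estimate_power(P, gamma=0.9))
```

States whose available transition matrices let the agent reach high-reward states quickly, for many random reward draws, come out with larger averages; that is the intuition behind "power" here, even though the paper's exact definition and normalization differ in details.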
The main mathematical content of Turner and co-authors is a series of results giving conditions under which most reward functions tend to lead to policies seeking out states s with large power, that is, a large ability to reap reward for general reward functions. So that's a kind of instrumental convergence result, and it is just the start of the broader project of putting Omohundro-Bostrom theory on a more rigorous mathematical foundation; there are a lot of possible generalizations and further work to be done here.
I should just mention one caveat here regarding the choice of averaging R over the uniform distribution on [0,1] to the power of the state space. One may note that this flies somewhat in the face of one of my own hobby horses from the mid-to-late 00s; I'll just show you the first page of a preprint I have from then, called "Uniform distribution is a model assumption". We should not just go ahead and apply uniform distribution assumptions to the world and think that this is uncontroversial; it's not, and there are many cases where this goes badly wrong. I'm not saying this to suggest that the Turner et al. paper is worthless, but more to suggest that further results for more general distributions of the reward function, and so on, would be interesting to get as well.
This was my last slide. I want to say, concerning the Turner and co-authors paper, that I don't think I understand instrumental convergence much better now that they have mathematized an aspect of it, but I think the main value of the paper is that it lays down a framework where we can analyze important problems in AI alignment. It's not the results in themselves that are so informative, but the potential to work further in the same framework. I see a comment here from David Wood, a question. So this is the end of the talk, and now we'll take a few minutes for discussion. David, do you want to turn on the microphone and ask your question out loud?
Hello. Thanks so much for a fascinating presentation. I was struck by this idea of ontological crisis, that an AI might discover something about the world which causes it to revise its goals. And then I was thinking: well, if it discovers some fundamental moral principles, is that going to cause it to revise its goals? But you pointed out, quite rightly, that it might not do so, because it might not have any obligation to respect these moral principles that it found. So hence my question: do you actually think it's worth prioritizing programming this moral internalism into all advanced AIs, namely that if an AI does find, in a reliable sense, a fundamental moral principle, then it will be obliged to respect that in everything else it does?
That's a very good question, and I need to think carefully about answering it, because if I answer yes, that could be very dangerous: if the AI discovers that hedonic utilitarianism is the objectively correct moral theory, then we will be doomed. So what if we somehow want to survive even in the face of such a situation? Nick Bostrom has suggested such a scenario, where the machines might discover a human-unfriendly but objectively true moral theory, and said that, okay, maybe we're then morally obliged to give up, but we could compromise here and suggest to the machine that it set up the Milky Way as a kind of reserve for human flourishing, and then it could use all the other 10^11 or 10^12 galaxies of the observable universe and create hedonium out of those. So most things in the universe will go as well as possible, but we'll also have some human flourishing, which would be good for us. I can't really make up my mind whether that would be wonderful, or whether the choice to do that, and to prevent our own galaxy from being turned into hedonium itself, would be a hideous crime against the universe, or something. These are very hard questions. I think your question is very interesting, but it probably does not have an obviously correct answer, so we should think carefully about it. Okay, I didn't really answer your question there. "No, no, I would be astonished if you had a comprehensive answer to that. But you've definitely given me more to think about, thanks." Thank you.
Does anyone else have a question? If not, I wanted to ask you a little bit more about this multipolar case, because that was interesting: it might be possible that we could guarantee the friendliness, or the alignment with human well-being, of individual AIs, and yet it's in the combination of their interactions that something new might emerge. I thought that's a bit like what happened in biological evolution, where the various sets of genes all had their own preservation incentives, perhaps. I think it happens in human society as well: we can imagine a human society where every single individual is a very kind and friendly person, and nevertheless we end up in social equilibria where bad things happen. Yes, so I think this emergence, at a higher level, of outcomes which are contrary to the apparent incentives of individuals does need to be considered; we can't just solve alignment at the level of individual AGIs. Yes. We'll talk a little bit more about that on Friday, and I think it's a growing subfield of the AI safety area. Anyone else, before we close down?
Meetup : Brussels - Mindfulness and mental habits
Discussion article for the meetup : Brussels - Mindfulness and mental habits
WHEN: 10 January 2015 01:00:00PM (+0100)
WHERE: Rue des Alexiens 55 1000 Bruxelles
LessWrong has been a bit quiet of late, partly because some of its best would-be contributors have focused on their own blogs. (This doesn't bode well for the long-term health of the site, but I'm not quite ready to rename this group SlateStarCodex Brussels.)
Among the most interesting off-site developments is sort of a generalization of the ideas discussed in Ugh Fields and Noticing Confusion: an ongoing conversation on mindfulness and trainable mental habits. Most of it is on Brienne Strohl's Agenty Duck (though there's a good deal of less link-able spillover on Facebook), and brand-new The Mind's UI should be all about it.
It's the month of new year's resolutions, as good a time as any to discuss such concrete self-improvement techniques.
----------------------------------------
We will meet at 1 pm at "La Fleur en papier doré", close to the Brussels Central station. The meeting will be in English to facilitate both French and Dutch speaking members.
If you are coming for the first time, please consider filling out this one minute form to share your contact information.
The Brussels meetup group communicates through a Google Group.
Meetup announcements are also mirrored on meetup.com
Discussion article for the meetup : Brussels - Mindfulness and mental habits
[Research sprint] Single-model crosscoder feature ablation and steering
This work was done as a research sprint while interviewing for UK AISI — it’s just a preliminary look into the topic, but I hope it will be useful for anyone else interested in the same ideas. Thanks to Joseph Bloom for helpful comments and suggestions. The sleeper agent and crosscoder setup is borrowed from the work done in my MARS stream, see https://www.alignmentforum.org/posts/hxxramAB82tjtpiQu/replication-crosscoder-based-stage-wise-model-diffing-2 for a detailed overview — particular thanks to Oliver Clive-Griffin for writing the crosscoders library and to Jason Gross and Rajashree Agrawal for guiding the stream.
All the experiments described here can be seen in this colab notebook, though I would recommend reading the post over reading the notebook.
Introduction
When trying to understand a sparse autoencoder feature, your first step is probably to look at where it activates on examples from the dataset. However this can sometimes be misleading, since it just shows what sort of text correlates with the feature’s activation rather than showing the causal effect of the feature on the model’s functioning. Feature ablation and steering are good ways to confirm what a feature is doing: you run the language model while modifying the activations to zero-out or boost the specified feature, and see what sort of text it generates.
Recently I’ve been working with single-model many-layer acausal crosscoders — these are much like SAEs except that rather than being trained on the activations of a single layer of a model, they are trained on the concatenation of the activations of all or many layers. When analysing crosscoder features I wanted to use ablation or steering to confirm my hypotheses about what the features are doing; but I realised that there are many possible approaches to doing ablation or steering in crosscoders, and I wasn’t sure which would give the most useful results. As far as I can tell this question hasn’t been addressed in the literature.[1] In th
Reflections on the PIBBSS Fellowship 2022
*Cross-posted from* [*LessWrong*](https://www.lesswrong.com/posts/gbeyjALdjdoCGayc6/reflections-on-the-pibbss-fellowship-2022) *and the* [*Alignment Forum*](https://www.alignmentforum.org/posts/gbeyjALdjdoCGayc6/reflections-on-the-pibbss-fellowship-2022)
Last summer, we ran the first iteration of the PIBBSS Summer Research Fellowship. In this post, we share some reflections on how the program went.
Note that this post deals mostly with high-level reflections and isn’t maximally comprehensive. It primarily focusses on information we think might be relevant for other people and initiatives in this space. We also do not go into specific research outputs produced by fellows within the scope of this post. Further, there are some details that we may not cover in this post for privacy reasons.
How to navigate this post:
1. If you know what PIBBSS is and want to directly jump to our reflections, go to sections "Overview of main updates" and "Main successes and failures".
2. If you want a bit more context first, check out "About PIBBSS" for a brief description of PIBBSS’s overall mission; and "Some key facts about the fellowship", if you want a quick overview of the program design.
3. The appendix contains a more detailed discussion of the portfolio of research projects hosted by the fellowship program (appendix 1), and a summary of the research retreats (appendix 2).
About PIBBSS
============
[PIBBSS](https://www.pibbss.ai/) (Principles of Intelligent Behavior in Biological and Social Systems) aims to facilitate research studying parallels between intelligent behavior in *natural* and *artificial* systems, and to leverage these insights towards the goal of building safe and aligned AI.
To this purpose, we organized a 3-month Summer Research Fellowship bringing together scholars with graduate-level research experience (or equivalent) from a wide range of relevant disciplines to work on research projects under the mentorship of experienced AI alignment researchers. The disciplines of interest included fields as diverse as the brain sciences; evolutionary biology, systems biology and ecology; statistical mechanics and complex systems studies; economic, legal and political theory; philosophy of science; and more.
This approach broadly -- the PIBBSS bet -- is something we think is a valuable frontier for expanding the scientific and philosophical enquiry on AI risk and the alignment problem. In particular, this aspires to bring in more empirical and conceptual grounding to thinking about advanced AI systems. It can do so by drawing on understanding that different disciplines already possess about intelligent and complex behavior, while also remaining vigilant about the disanalogies that might exist between natural systems and candidate AI designs.
Furthermore, bringing diverse epistemic competencies to bear upon the problem also puts us in a better position to identify neglected challenges and opportunities in alignment research. While we certainly recognize that familiarity with ML research is an important part of being able to make significant progress in the field, we also think that familiarity with a large variety of intelligent systems and models of intelligent behavior constitutes an underserved epistemic resource. It can provide novel research surface area, help assess current research frontiers, de- (and re-)construct the AI risk problem, help conceive of novel alternatives in the design space, etc.
This makes interdisciplinary and transdisciplinary research endeavors valuable, especially given how they otherwise are likely to be neglected due to inferential and disciplinary distances. That said, we are skeptical of *“interdisciplinary for the sake of it”*, but consider it exciting insofar it explores specific research bets or has specific generative motivations for why X is interesting.
For more information on PIBBSS, see [this introduction post](https://www.lesswrong.com/posts/4Tjz4EJ8DozE9z5nQ/introducing-the-principles-of-intelligent-behaviour-in), this [discussion of the epistemic bet](https://www.lesswrong.com/posts/FuToH2KHxKmJLGk2B/ai-alignment-as-navigating-the-space-of-intelligent), our research map (currently undergoing a significant update, to be released soon), and these [notes on our motivations and scope](https://docs.google.com/document/d/1iOQ13na21jEjk35PGVYF-t7MttDDaxd_WGFwL1Q4hM4/edit?usp=sharing).
Some key facts about the fellowship program
===========================================
* 12-week fellowship period (~mid-June to early September)
* 20 fellows, and 14 mentors[[1]](#fn3ff72sklc07) (for a full list, see [our website](http://pibbss.ai/our-people))
* Fellows had relevant backgrounds in the following fields:
+ Complex systems studies, network theory, physics, biophysics (~4)
+ Neuroscience, computational cognitive science (~4)
+ Formal Philosophy, Philosophy of Science (~5)
+ Evolutionary Biology, Genomics, Biology (~3)
+ Chemistry (~1)
+ Computational social science (~1)
+ Economics (~2)
* 6-week reading group (“Deep Read” format[[2]](#fn5dmzt1u3au)) serving as an introduction to AI risk (prior to the start of the fellowship period) (NB we are planning to release an updated and extended version of the same reading programme)
* 2 research retreats (inviting fellows, mentors and a couple of external guests; taking place at a venue outside Oxford, and nearby Prague respectively)
* Individual research support/facilitation (to different degrees for different fellows)
* ~Bi-weekly speaker series throughout the fellowship period
* Further scaffolding in the form of weekly check-ins, optional co-working sessions and occasionally socials,
* Optional stay in Prague to work from a shared office space
* Stipend (3,000 USD/month)
* Financial support for travel and accommodation (e.g. to visit their mentor or the residency)
The original value proposition of the fellowship
------------------------------------------------
This is how we thought of the fellowships’ value propositions prior to running it:
* **Generating a flow of insights** between the fellow/their domain of expertise towards the AI alignment mentor, and the AI alignment community at large; via the research output.
* **Attracting and actuating talent** with the competencies and motivation to conduct valuable AI alignment or AI governance research (hits-based).
* **Building bridges** between AI alignment and other fields with relevant knowledge and talent, thereby lowering current barriers to fruitful exchanges.
* **Gaining value of information** about the value/tractability/etc. of (facilitating) PIBBSS-style research, i.e. careful, value-sensitive, and epistemically diverse approaches toward the design and implementation of aligned and beneficial artificial intelligent systems drawing on the study of currently existing systems implementing complex or intelligent behavior.
Overview of main updates
========================
* We continue to be excited and have become more confident in the **value and tractability of PIBBSS-style research for alignment.**
* We continue to be excited about the value of **building bridges**between the alignment community and other relevant epistemic communities which offer a so-far untouched pool of insights and talent.
* We made positive updates about the **tractability of attracting strong candidates from PIBBSS-relevant domains**, *with the caveat* of finding important room for improvement in outreach to the conceptual social sciences.
* We made positive updates about our **ability to adequately introduce them to AI risk and pedagogy**, *with the caveat* of finding important room for improvement concerning technical ML-safety pedagogy.
* We gathered preliminary positive evidence about our **ability to positively affect trajectory changes.** Prior to running the fellowship, a relevant source of skepticism was that people from fields as diverse as the PIBBSS portfolio might be significantly less likely to commit to working on AI alignment in the long term. We haven’t seen this skepticism confirmed, but acknowledge it is largely too early to tell.
+ Furthermore, we have been reinforced in our belief that, even if trajectory changes are considered a significant source of value, focusing on research output is a healthy way of causing trajectory change as a side-effect.
* We gained better models about the ways **conflicting incentive landscapes** might curtail value. (For more detail, see discussion under failures and/or shortcomings.) In particular, we will pay more attention to this consideration in the application process, and will provide more structural support encouraging fellows to produce more and more frequent communicable output to capture the research/epistemic progress that has been generated.
* We suspect a bunch of value might come through **hits-based outcomes.** We gathered light evidence that there might be significant expected value coming from more “hits-based” avenues, e.g. attracting senior or highly promising scholars and counterfactually contributing to them working on AI alignment and related topics. We think there is value in identifying such cases more systematically, and are exploring ways to do so.
* Fellows with little **research experience** tended to have a harder time during the fellowship and would have benefited from more structure. We didn’t make a large update here, and still think it is sometimes worth accepting even nominally junior individuals to the fellowship. That said, going forward, we are less likely to accept applicants with less than graduate-level (or equivalent) research experience. We have started thinking about and are interested in exploring alternative ways of onboarding promising people with less research experience.
Main successes and failures
===========================
Successes
---------
Overall, we believe that most of the value of the program came from researchers gaining a better understanding of research directions and of the AI risk problem. Some of this value manifests as concrete research outputs, and some as effects on fellows’ future trajectories.
* **Research progress:** Out of 20 fellows, we find that at least 6–10 made interesting progress on promising research programs.
+ A non-comprehensive sample of research progress we were particularly excited about includes work on intrinsic reward-shaping in brains, a dynamical systems perspective on goal-oriented behavior and relativistic agency, and an investigation into how robust humans are to being corrupted or mind-hacked by future AI systems.
* **Trajectory changes:** Out of 20 fellows, we believe that at least 11–15 gained significant exposure to AI risk and alignment, and we expect that some of them will continue to engage with the field and make meaningful contributions in the future.
+ We believe the fellowship had a significant *counterfactual impact* on 12 fellows, due to many of them having limited or non-existent exposure to AI risk/alignment prior to the fellowship. In some cases, fellows have changed their research directions/careers, some have taken up jobs, and some have become active in AI alignment field-building-related activities.
* **Mentors** report finding the fellowship to be a good counterfactual use of their time. Of the 10 mentors who filled in the survey, 8 reported it was an equally good or better use of their time, of whom 6 reported it was a strictly better use of their time.
+ The main sources of value reported by mentors were: (i) Developing their thinking on a given research direction in AI alignment, benefiting from new perspectives or concrete insights from the fellow; (ii) Being part of a research network relevant to their own research interests/directions; finding actual/potential future collaborators; (iii) Concrete research progress/output produced during the program together with a fellow.
+ Mentors, as well as a handful of “external” people (i.e. people who were neither mentors, fellows, nor organizers, but who interfaced with different parts of the program, e.g. the retreats or research facilitation), reported positive updates about the value and tractability of the research bet (i.e. “PIBBSS-style research”).
* **Secondary sources of value:**[[3]](#fnd82uezixxz) we believe the program generated secondary sources of value in the following domains:
+ **Progress towards developing a research network**
- Running the PIBBSS fellowship has provided us with information about the value of acting as a sort of Schelling point for PIBBSS-style research, and has improved our ability to do so. Research retreats in particular, and speaker events and the Slack space to a lesser extent, have served to bring together scholars (beyond fellows) interested in AI alignment research from PIBBSS-style angles. We continue to be excited about the potential of supporting and developing this research network, and believe we are now more likely to notice valuable opportunities.
+ **Building bridges between AI alignment and other relevant epistemic communities**
- We believe that epistemic communities outside of the AI safety/alignment community are working on interesting questions with direct relevance to progress in AI alignment. We have started exploring ways to build “bridges” between these communities, with the goal of engaging both insights and talent.
- For example, we are trialing a speaker series as a way of testing the possibility of broader epistemic and social bridges with specific relevant research directions. The series invites researchers from inside and outside of AI alignment who we believe produce interesting work related to PIBBSS priorities. (If this goes well, we might make the speaker series openly accessible to the AI alignment and other adjacent communities.) Further, we have built bridges in the form of personal networks and fellows themselves sometimes represent such interdisciplinary bridges. Some of our fellows have taken up projects aiming to do such work, e.g. by organizing research workshops or writing introductory materials.
+ **Developing know-how about Program Design**
- **Introducing people to AI risk and Alignment Research.**
* Running the fellowship program allowed us to make concrete progress in developing pedagogical-content knowledge for AI risk and alignment. Concretely, we ran a 6-week (pre-fellowship) reading group in the deep read format dedicated to an introduction to *key phenomena in AI risk*, organized two research retreats (more details below in "Appendix 2"), and provided research facilitation throughout the program.
* We did better than expected at introducing people to AI alignment who had no or limited prior familiarity with the field[[4]](#fn2tedsigkyw) and who came in with a decent amount of inferential distance from AI risk (e.g. due to being part of previously underrepresented epistemic demographics, such as the natural or social sciences). This leads us to make slight positive updates about the accessibility of AI risk to researchers from such domains and makes us think that there is potential for value in improving “AI-risk pedagogy” (for future PIBBSS activities and for the AI-alignment community in general).
+ However, we also encountered challenges when it comes to facilitating knowledge transfer towards prosaic alignment approaches (see "Failures and/or shortcomings" for more detail).
- **Epistemology, Philosophy of Science and Meta-theory of Alignment**
* We found that running a diverse research programme is highly synergistic with also making progress on philosophy of science in AI, epistemology of Alignment research, and to some extent AI Strategy. (Note that we believe this to be similar to some parts of [Refine’s model](https://www.lesswrong.com/posts/5uiQkyKdejX3aEHLM/how-to-diversify-conceptual-alignment-the-model-behind#The_Long_View__Refine_and_Conjecture), even though the programs were structurally different.)
* Examples of frontiers of intellectual progress include: (i) **Understanding of AI risk in terms of intelligent behavior and complex phenomena** (e.g. shining light on emerging parallels in agent-foundations research and complex systems theory); (ii) **Philosophical reflections on AI risk and alignment**, e.g., non-anthropomorphic articulations of AI risk, problematizing and clarifying the concepts of consequentialism and goal-orientedness in AI risk stories; (iii) **Epistemology and philosophy of science perspectives on AI risk and alignment**, e.g., epistemological vigilance with respect to interdisciplinary perspectives on AI risk and alignment, mapping disagreements on risk models and with respect to analogies and disanalogies relevant to AI, improving AI-risk pedagogy, and cross-cohort conversations during the fellowship; (iv) **Translation between technical ML safety and other fields**, e.g., evolutionary biology and selection theorems.
+ **Learning about outreach and selection of talent from natural and social sciences**
- Prior to running the fellowship, one key uncertainty concerned our ability to attract strong candidates from natural and social science backgrounds.[[5]](#fn3l8fqnczhyw) Overall, we made positive updates about our ability to do so and think this increases the expected value of future initiatives targeted at people from a similar target audience.
- In terms of disciplines, judging from the applicant pool, we were particularly good at attracting people from philosophy, physics/complex systems, and brain sciences; we did okay but not great in terms of attracting people from biology and computational social sciences; and we underperformed at attracting people from the theoretical social sciences (e.g. legal and political theory, institutional economics and political economy, analytical sociology).
+ **Miscellaneous other notable outcomes:**
- The fellowship produced 3-5 long-term collaborations among fellows and mentors.
- One fellow is organizing an AI-risk workshop at the ALife 2023 conference. We are excited about the potential knowledge and talent overlap between the ALife and the AI alignment community.
- Three fellows are writing an introduction to AI risk targeted at natural systems researchers, in particular, physics and biology researchers.
- Some concrete artifacts created in the context of the fellowship (this does *NOT* include outputs by fellows):
* A [Speaker Series](https://m.youtube.com/channel/UCMo5Ei9xLHbk9sqNMSH3mdQ) inviting people from the alignment community (e.g., Alex Turner on Shard Theory) as well as outside (e.g., Simon DeDeo on the theory of explanations, Michael Levin on hierarchical agency in biological systems), which we think was a useful exercise in testing the possibility of broader epistemic and social bridges with specific relevant research directions.
* A PIBBSS Research map (currently undergoing a significant update) providing an overview of six clusters of research directions within the PIBBSS research bet, distilled from conversations TJ and Nora had with mentors regarding their key interdisciplinary interests:
+ Understanding and Interpreting Minds and Cognition
+ Safe Interfaces to Human Preferences
+ Processes of Knowledge Production and Reflection
+ Social and Economic Coordination
+ Evolutionary Selection for X
+ Information and Goals in Autonomous Behavior
* Two talks on the landscape of epistemic strategies in AI alignment, and specifically the epistemic assumptions underlying the “PIBBSS” epistemic bet (once at EAGx Prague, once at [HAAISS](https://www.youtube.com/watch?v=h86oHBbQwI8&ab_channel=PIBBSSFellowship))
* Some initial posts in a [series on the philosophy of science of AI alignment](https://www.lesswrong.com/s/4WiyAJ2Y7Fuyz8RtM) (those ideas are ~directly downstream from working on PIBBSS)
* A (WIP) reading list providing an introduction to the broad contours of the AI-risk problem suitable for an interdisciplinary cohort (a side-product of running the reading group).
* A curated [list of PIBBSS-style books](https://spurious-wallflower-098.notion.site/PIBBSS-Library-880d40e68a3e47968e4eb375df1a28af)
Failures and/or shortcomings
----------------------------
* **Concrete output:** While we are fairly happy with the research progress, we think it insufficiently translated into communicable research output within the timeframe of the fellowship. Accordingly, we believe that a main (if not *the* main) dimension of improvement for the fellowship lies in providing more structural support encouraging more and faster communicable output to capture the research/epistemic progress that has been generated.
* **Limitations of our ability to facilitate knowledge transfer towards prosaic alignment approaches:** According to our judgment, transfer towards prosaic alignment approaches (e.g., drawing on insights from evolutionary biology towards [training stories](https://www.lesswrong.com/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine)) most requires developing novel conceptual vocabulary, and would have benefited from fellows having more familiarity with concepts in technical ML, which we were insufficiently able to provide through our pedagogical efforts.
+ We continue to believe, thanks in part to preliminary yet promising evidence of research progress as judged by us and mentors, that this type of epistemic progress, though challenging, is valuable to pursue. Going forward, we will try to tackle these challenges by a) improving the prosaic-relevant pedagogy leading up to and during the fellowship, b) improving our research facilitation capacity, in particular for prosaic projects, and c) putting more weight on fellows working on prosaic projects having prior exposure to ML/prosaic AI alignment.
* **Conflicting incentive landscapes can curtail value:** fellows being embedded in conflicting incentive landscapes (e.g. academic and/or professional) has the potential to limit or curtail value. This can manifest as a reduction in fellows’ time commitments to the program/their project, or (implicitly or overtly) influence research-related choices, e.g. the project topic, the scoping or the type of output that is aimed at. Among others, we suspect that academic incentives contributed to a tendency across the cohort to sometimes aim for paper-like outputs over, e.g., forum posts, which in turn led to a reduced volume of communicable research output within the period of the fellowship (see the point about concrete output above). This can cause delays in ideas reaching and being discussed/evaluated/criticized/improved upon by the larger epistemic community. Going forward, we will be more cognizant of the relevant incentive landscapes, and, in particular, add structures to the program design encouraging fellows to release outputs (even intermediary ones) directly as well.
* **Outreach to theoretical social sciences**: while we were happy about outreach to natural sciences in general, we underperformed at attracting people from the conceptual social sciences (e.g. legal and political theory, institutional and political economy, analytical sociology). We think this represents a missed opportunity.
A brief note on future plans
============================
Overall, we believe the PIBBSS summer research fellowship (or a close variation of it) is worth running again. We applied for funding to do so.
The key dimensions of improvement we are envisioning for the 2023 fellowship are:
* More structural support, encouraging more and faster communicable output capturing the research/epistemic progress that has been generated,
* Improvement of pedagogy (and facilitation) relevant to prosaic projects,
* Being more careful about accepting fellows who are embedded in conflicting incentive landscapes.
More tentatively, we might explore ways of running (part of the program) in a (more) mentor-less fashion. While we think this is hard to do well, we also think this is attractive for several reasons, mainly because mentorship is scarce in the field. Some potential avenues of exploration include:
* Reducing demands on mentor bandwidth by moving towards more efficient feedback mechanisms, and substituting with:
+ Research facilitation,
+ Peer-interaction,
+ Pairing with junior researchers in AI.
* Exploring alternative programs aimed at audiences of different seniorities, aided by better-scoped projects/research programs.
Beyond the format of the summer research fellowship, we tentatively think the following (rough) formats are worth further thought. Note that we are not saying these programs are, all things considered, worthwhile, but that, given our experience, these are three directions that may be worth exploring further.
1. A **reading group/introduction course to AI risk/alignment** suitable for natural and social science demographics. We are considering further developing the pre-fellowship reading group, and experimenting with whether it might be worth running it (also) outside of the fellowship context.
2. An **~affiliate program,** targeted at people who are already able to pursue (PIBBSS-style) research independently. Such a program would likely be longer (e.g. 6 months or more) and focus on providing more tailored support towards affiliates developing their own (novel) PIBBSS-style research directions.
3. A range of **research workshops/retreats/conferences** aimed at specific domains or domain interfaces (within the scope of PIBBSS research interests), aiming to e.g. test or develop specific research bets (e.g., complex systems in interpretability, ALife in agent foundations, predictive processing) and/or create Schelling points for specific demographics (e.g., brain sciences in AI alignment).
PIBBSS is interested in exploring these, or other, avenues further. **If you have feedback or ideas, or are interested in collaborating, feel encouraged to reach out to us ([email protected]).**
We want to thank…
=================
For invaluable help in making the program a success, we want to thank our fellow organizing team members *Anna Gadjdova* and *Cara Selvarajah*; and several other people who contributed to different parts of this endeavor, including *Amrit Sidhu-Brar, Gavin Leech, Adam Shimi, Sahil Kulshrestha, Nandi Schoots, Tan Zhi Xuan, Tomáš Gavenčiak, Jan Kulveit, Mihaly Barasz, Max Daniel, Owen Cotton-Barratt, Patrick Butlin, John Wentworth, Andrew Critch, Vojta Kovarik, Lewis Hamilton, Rose Hadshar, Steve Byrnes, Damon Sasi, Raymond Douglas, Radim Lacina, Jan Pieter Snoeji, Cullen O’Keefe, Guillaume Corlouer, Elizabeth Garrett, Kristie Barnett, František Drahota, Antonín Kanát, Karin Neumannova, Jiri Nadvornik*, and anyone else we might have forgotten to mention here (our sincere apologies!). Of course, we are also most grateful to all our mentors and fellows.
Appendix 1: Reflections on Portfolio of Research Bets
=====================================================
In this section, we will discuss our reflections on the portfolio of research bets that fellows worked on, which are distributed across a range of “target domains”, in particular:
1. Agent Foundations,
2. Alignment of Complex Systems,
3. Digital Minds and Brain-inspired Alignment,
4. Prosaic Alignment (Foundations),
5. Socio-technical ML Ethics,
6. Experimental and Applied Prosaic Alignment.
We will focus the discussion on aspects like the theory of impact for different target domains and the tractability of insight transfer. The discussion will aim to abstract away from fellow- or project-specific factors. Note that we skip the discussion of specific projects or other details here in the public post.
**TL;DR:** At a high level, projects aimed towards i) Agent Foundations, ii) Alignment of Complex Systems, and iii) Digital Minds and Brain-inspired Alignment most consistently made valuable progress. Projects aimed at iv) Prosaic Alignment faced the largest challenges. Specifically, they seem to require building new vocabulary and frameworks to assist in the epistemic transfer and would have benefited from fellows having more familiarity with concepts in technical ML, which we were insufficiently able to provide through our pedagogical efforts. We believe this constitutes an important dimension of improvement.

**1. Agent Foundations** (25–30%) ***[AF]***
* PIBBSS had ~5 fellows (and 4 projects) working on Agent Foundations, i.e., interdisciplinary frontiers for clarifying concepts in the theory of agency, optimization and cognition. Progress on these problems contributes towards understanding and characterizing forms of agency to steer towards (e.g., corrigibility) or away from (e.g., power-seeking).
**2. Alignment of Complex Systems** (20–25%) ***[CS]***
* PIBBSS had ~4 fellows (and 3 resulting projects) working on Alignment of Complex Systems, aimed at clarifying and/or formalizing relevant phenomena found within complex adaptive systems of various types and scales. The importance of these topics comes from their contribution to a better theoretical understanding of learned adaptive behaviors in AI systems, of target behaviors of aligned systems, and of AI governance/macrostrategy.
**3. Digital Minds** (and Brain-inspired alignment research; 5–10%) ***[DM]***
* We had ~2 fellows (and 2 projects) working at the intersection of neuroscience (and/or cognitive science) and technical questions on the nature of minds and cognition, especially as they pertain to digital minds or brain-inspired AI systems. These topics seek to understand phenomena relevant to the functioning of sophisticated forms of cognition or to questions concerning the moral status of digital minds, as well as to understand the structure and behavior of mind-like algorithmic systems.
***Discussion of AF, CS and DM:***
The above three target domains (i.e. Agent Foundations, Alignment of Complex Systems, and Digital Minds) are all, in some sense, similar insofar as they are all basically pursuing conceptual foundations of intelligent systems, even if the three approach the problem from slightly different starting positions, and with different methodologies. The three bundles together accounted for about 50-55% of projects, and roughly 50% of them were successful in terms of generating research momentum. This makes it meaningful to pay attention to two other similarities between them: a) the overlapping vocabularies and interests with respective neighboring disciplines, and b) the degree of separation (or indirectness) in their theory of impact.
The object-level interests in AF, CS, or DM mostly have the same type signature as questions that motivate researchers in the respective scientific and philosophical disciplines (such as decision theory, information theory, complex systems, cognitive science, etc.). This also means that interdisciplinary dialogue can be conducted relatively more smoothly, due to shared conceptual vocabulary and ontologies. Consequently, we can interpret the motivational nudges provided by PIBBSS here as a gentle selection pressure, towards alignment relevance, on which specific questions get investigated.
At the same time, the (alignment-relevant) impact from research progress here is mostly indirect, coming from better foresight of AI behavior and as an input to future specification and/or interpretability research (see discussion in Rice and Manheim 2022[[6]](#fng1nkv2y7mwr)). This provides an important high-level constraint on value derived here.
**4. Prosaic Alignment Foundations** (25–35%) ***[PF]***
* We had ~5 fellows (and 5 projects) working on foundational topics related to work on prosaic alignment (or alignment of advanced ML systems), including work related to value learning, interpretability, ML training, human assistance, scaling laws and capabilities gains, and machine deception. As alignment research on advanced ML systems is most directly proximate to developing practical alignment strategies/proposals (as a key epistemic artifact[[7]](#fntxw6kblvscj)), these topics constitute what is very likely a critically important area. Projects in the fellowship aimed at bringing in insights from cognitive science, evolutionary biology, statistical mechanics, etc. towards these problems.
***Discussion of PF:***
While some PF projects did succeed in finding promising research momentum, there was higher variance in the tractability of progress. This bundle also had a meaningfully lower ex-post excitement by mentors (compared to the rest of the portfolio), and caused us to make significant updates about the epistemic hardness of transferring insights from other disciplines.
Unlike AF+CS+DM discussed above, the interdisciplinary transfer of insights towards prosaic alignment seems to involve building entirely new vocabularies and ontologies to a significantly higher degree. For example, transferring insights from evolutionary theory towards understanding any particular phenomenon of relevance in deep learning seems to be bottlenecked by the absence of a much richer understanding of the relevant isomorphisms than currently exists. In fact, the projects in this bundle that did succeed in finding good research momentum were strongly correlated with prior ML familiarity of the fellows.
However, given the potential for high-value insights coming from bets like this, we think it is worth further exploring ways of building relevant ML familiarity for the fellows, such that they can contribute more efficiently and constructively. At the very least, we intend to add pedagogical elements for familiarizing fellows with Machine Learning Algorithms and with ML Interpretability, in addition to improving the pedagogical elements on Information Theory and RL Theory from the previous iteration.
**5. Socio-technical ML Ethics** (10%) ***[ME]***
* We had 2 fellows (and 2 projects) working on contemporary ethical concerns with ML systems, related to the epistemic and socio-technical effects of AI. Projects looked at topics such as social epistemology and epistemic risks, normative concepts and trade-offs in ML research, and the relationship between the epistemic content of ML models and scientific discoveries.
* While topics in ML ethics can generally be seen as non-neglected within the larger ML community, we think that facilitating philosophically well-grounded projects on these topics can aid in raising the profile of understudied normative considerations about current and future AI systems. We believe facilitating such work contributes to fostering a strong epistemic environment (from a diversity of opinions on AI risk, especially insofar as they come from well-grounded positions, including disagreements), as well as opening up surface area for positive-sum synergies across a diverse study of normative demands on AI systems and the steering of AI systems towards good-to-have properties.
**6. Experimental and Applied Prosaic Alignment** (5–10%) ***[EP]***
* We hosted 2 fellows (and 2 projects) working on experimental and applied projects with relevance to topics in prosaic alignment research. These projects involved some ML research and experimentation/engineering that benefited from an interdisciplinary perspective and flow of insights.
* This kind of research is valuable in similar ways to conceptual progress on prosaic alignment (i.e. by building and testing new frameworks) and helps in connecting the practical understanding of ML research with conceptual work more directly. We also recognize that such work is non-neglected in the larger community and can sometimes be susceptible to safety-washing of capabilities-advancing research; however, in our judgment the fellows were adequately cognizant of this.
Appendix 2: Retreats Summary
============================
We organized two retreats during the fellowship program for familiarization with AI risk and the alignment problem, facilitation of cross-cohort dialogue, and other benefits of in-person research gatherings. Both retreats had a mix of structured and unstructured parts, where the structured parts included talks, invited speakers, etc., as well as sessions directed at research planning and orientation, while the unstructured parts included discussions and breakout sessions. A small sample of recurring themes in the unstructured parts included deconfusing and conceptualizing consequentialist cognition, mechanizing goal-orientedness, the role of representations in cognition, distinguishing assistive behavior from manipulative behavior, etc.
The first retreat was organized at a venue outside Oxford, at the beginning of the summer, and included sessions on different topics in:
1. Introduction to AI Risk (e.g. intro to instrumental convergence, systems-theoretic introduction to misalignment problems, security mindset, pluralism in risk models, a talk on [tech company singularities](https://www.lesswrong.com/posts/ezGYBHTxiRgmMRpWK/tech-company-singularities-and-steering-them-to-reduce-x), etc.)
2. Epistemology of Alignment Research (e.g. proxy awareness in reasoning about future systems, epistemic strategies in alignment, [epistemological vigilance](https://www.lesswrong.com/posts/72scWeZRta2ApsKja/epistemological-vigilance-for-alignment), recognizing analogies vs. disanalogies, etc.)
3. Mathematical Methods in Machine Intelligence (e.g. intro lecture on information theory, intro lecture on RL theory, a talk on the [telephone theorem](https://www.lesswrong.com/posts/jJf4FrfiQdDGg7uco/the-telephone-theorem-information-at-a-distance-is-mediated) and the natural abstractions hypothesis, etc.)
4. Overview of Research Frontiers and AI Design Space (e.g. a session on reasoning across varying levels of goal-orientedness, capabilities and alignment in different AI systems, and the PIBBSS interdisciplinary research map)
The second retreat was organized near Prague a few weeks before the formal end of the fellowship, and was scheduled adjacent to the [Human-Aligned AI Summer School (HAAISS) 2022](https://humanaligned.ai/index-2022.html). It included fellows presenting research updates and seeking feedback, some talks continuing the themes from the previous retreat (e.g. why alignment problems contain some hard parts, problematizing consequentialist cognition and second-person ethics, etc.), and practising [double crux](https://www.lesswrong.com/tag/double-crux) on scientific disagreements (such as whether there are qualitative differences in the role of representations in human and cellular cognition).
1. **[^](#fnref3ff72sklc07)**Close to the end of the fellowship program, Peter Eckersley, one of our mentors, as well as an admired friend of many people involved in PIBBSS, [passed away](https://forum.effectivealtruism.org/posts/ivep4R7LoSLhWwHGX/peter-eckersley-1979-2022). We mourn this loss and are grateful for his participation in our program.
2. **[^](#fnref5dmzt1u3au)**Here is [an explanation of how deep reading groups work](https://threadreaderapp.com/thread/1285415881120481280.html). We were very happy with how the format suited our purposes. Kudos to Sahil Kulshrestha for suggesting and facilitating the format!
3. **[^](#fnrefd82uezixxz)**By “primary” sources of value, we mean those values that ~directly manifest in the world. By “secondary” values we mean things that are valuable in that they aid in generating (more) primary value (in the future). We can also think of secondary values as “commons” produced by the program.
4. **[^](#fnref2tedsigkyw)**6 out of 20 fellows had negligible prior exposure to AI risk and alignment; 10 out of 20 had prior awareness but lack of exposure to AI-risk technical discussions; 4 out of 20 had prior technical exposure to AI risk.
5. **[^](#fnref3l8fqnczhyw)**Some numbers about our application process:
- Stage 1: 121 applied,
- Stage 2: ~60 were invited for work tasks,
- Stage 3: ~40 were invited for interviews,
- Final number of offers accepted: 20
6. **[^](#fnrefg1nkv2y7mwr)**Issa Rice and David Manheim (2022), *Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety*, <https://arxiv.org/abs/2201.02950>
7. **[^](#fnreftxw6kblvscj)**Nora Ammann (2022), *Epistemic Artifacts of (conceptual) AI alignment research*, [https://www.alignmentforum.org/s/4WiyAJ2Y7Fuyz8RtM/p/CewHdaAjEvG3bpc6C](https://www.lesswrong.com/s/4WiyAJ2Y7Fuyz8RtM/p/CewHdaAjEvG3bpc6C)
The write-up proposes an identification of “four categories of epistemic artifacts we may hope to retrieve from conceptual AI alignment research: a) conceptual de-confusion, b) identifying and specifying risk scenarios, c) characterizing target behavior, and d) formalizing alignment strategies/proposals.”
Making alignment a law of the universe
[Crossposted from my substack Working Through AI. I'm pretty new to writing about AI safety, so if you have any feedback I would appreciate it if you would leave a comment. If you'd rather do so anonymously, I have a feedback form.]
----------------------------------------
TLDR: When something helps us achieve our goals, but is not an end in itself, we can say it is instrumentally valuable. This property is determined by the environment, deriving directly from its structure. For example, knowledge of physical constraints like gravity is widely useful. For an AI, alignment will not generally be instrumentally valuable. This means it will not be self-correcting, making misalignment the default case. We can change this by modifying the environment an AI experiences — by altering the laws of its universe so that desirable capabilities, like being nice to humans, are useful, and undesirable ones, like scheming, are not. For an LLM, this mainly looks like changing the dominant patterns in its training data. In this post, I explore these ideas, and end by proposing some pre-training data curation experiments. I suggest our long-term goal should be to find a kind of basis transformation after which alignment naturally emerges.
----------------------------------------
When something is a means to an end, rather than an end in itself, we can say it is instrumentally valuable. Money is a great example. We want money either because we can use it to buy other things — things that are intrinsically valuable, like food or housing — or because having lots of it confers social status. The physical facts of having cash in your wallet or numbers in your bank account are not ends in themselves.
You can extend this idea to skills and knowledge. Memorising bus times or being able to type really fast are not intrinsically valuable, but act as enablers unlocking value elsewhere. If you go down the tech tree to more fundamental skills, like fine motor control, reading social cues, or b
Using AI/ML to gain situational understanding from passive network observations
I Introduction
---------------
Almost all buildings in any government, military or commercial enterprise today operate using a network which communicates using the Internet Protocol [[8](#bib.bib1 "Internet protocol specification")]. There is significant information available in the network packets that are travelling back and forth between the occupants of the building, and between them and the different machines outside the building.
This network traffic has been mined for information, but the primary applications for which it has been used are in network security [[11](#bib.bib2 "A survey of network flow applications")], for use-cases such as intrusion detection and intrusion prevention. The other primary use-case for analyzing network traffic has been in creating network traffic analysis models, which can find applications in network planning and deployment.
However, network packet inspection has many uses which go beyond the scope of network security analysis and network planning. The analysis of network packets can provide useful ‘situational awareness’ of what may be happening within the network: identifying people and devices that are present in the environment, vulnerabilities in the network infrastructure, the behavior of devices, and more. Obtaining much of this insight from the network packets requires combining pre-existing knowledge with the content being carried within the network, using a mixture of machine learning algorithms along with some domain knowledge of networks, and a flexible infrastructure that can support a variety of use-cases.
In this paper, we discuss this broad set of use-cases which can be supported using network packet analysis, and describe a system which we have built to implement them. Successful determination of the situation in these cases requires a combination of network domain knowledge and an application of AI technologies. We will discuss both aspects in this paper.
II Use-Cases
-------------
The typical deployment scenario we have is shown in Figure [1](#S2.F1). The network packet collection system is installed in a building (or other premises) and is used to monitor packets just before they exit the firewall. The packet monitoring system can also collect information from other points in the building, if needed. The system may further incorporate data available within the building or outside it to provide auxiliary information, such as any information about registration of devices with users, or information available from some global location.

Fig. 1: The environment assumed for Cyber Physical Systems. Cyber physical systems are connected and made accessible over the Internet. The most common usage will be a manager or legitimate user issuing control commands to the IoT device.
Because the observation point is within the firewall, we assume that the devices in the network can be identified by their IP addresses. The IP address assignment of any device could change, but the change can usually be detected if the system also has information from other sources such as the DHCP servers [[5](#bib.bib3 "Automated configuration of tcp/ip with dhcp")] deployed within the environment. Depending on the deployed environment and the location and number of observation points, there may be alternate ways of identifying the device from its IP address.
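As an illustration of the bookkeeping involved, a minimal sketch is shown below; the class, field names and API are illustrative rather than part of the system described in this paper. It keeps a time-ordered record of DHCP lease events so that an IP address observed at a given time can be attributed to a stable device identity (its MAC address) even after the IP assignment changes.

```python
from collections import defaultdict


class DhcpIdentityTracker:
    """Map (ip, timestamp) observations to stable device identities (MAC addresses),
    based on DHCP lease events observed on the network."""

    def __init__(self):
        # For each IP, keep a chronologically sorted list of (lease_start, mac).
        self._leases = defaultdict(list)

    def record_lease(self, ip, mac, ts):
        """Record that `ip` was assigned to device `mac` at time `ts` (e.g. from a DHCP ACK)."""
        self._leases[ip].append((ts, mac))
        self._leases[ip].sort()

    def device_for(self, ip, ts):
        """Return the MAC that most recently acquired `ip` at or before time `ts`, or None."""
        owner = None
        for lease_ts, mac in self._leases.get(ip, []):
            if lease_ts <= ts:
                owner = mac
            else:
                break
        return owner


if __name__ == "__main__":
    tracker = DhcpIdentityTracker()
    tracker.record_lease("10.0.0.5", "aa:bb:cc:11:22:33", ts=100.0)
    tracker.record_lease("10.0.0.5", "aa:bb:cc:44:55:66", ts=500.0)  # IP later reassigned
    print(tracker.device_for("10.0.0.5", 250.0))  # -> aa:bb:cc:11:22:33
    print(tracker.device_for("10.0.0.5", 600.0))  # -> aa:bb:cc:44:55:66
```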
Packet inspection tools such as Zeek [[22](#bib.bib4 "The zeek network security monitor")], Snort [[17](#bib.bib7 "Snort: lightweight intrusion detection for networks.")] or Wireshark [[12](#bib.bib8 "Wireshark & ethereal network protocol analyzer toolkit")] can provide an initial examination of the contents of the packets, but their functions can be augmented with AI and machine learning capabilities. With this augmentation, the observation of network packets can yield the following types of information.
Discovery of devices: The number, type, and attributes of devices that are connected to the network and communicating actively on it can be detected, identified and characterized. These include devices which do not have a management agent installed on them, and do not respond to queries initiated by network management protocols such as SNMP [[19](#bib.bib9 "SNMP, snmpv2, snmpv3, and rmon 1 and 2")], or system management protocols such as WBEM [[20](#bib.bib10 "Web-based enterprise management architecture")]. The only devices that cannot be identified by this mechanism are those that never communicate on the network.
The discovery of devices is important for a variety of purposes. For one, it lets the administrators identify any unauthorized devices that are present in the environment. Also, in the case of network audits, it provides a report of all the devices present. Some enterprises conduct an audit of all devices on a periodic basis, and the discovery of devices is an important component of that audit. In some cases, the number of devices present in an environment of a particular type may be required to check on the number of licenses needed for their use. In other cases, when the operation and management of an environment needs to be turned over to an outsourcing company, the discovery can provide a better estimate of the effort and cost required for the outsourcing.
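As an illustration, the following sketch builds a simple inventory of internal devices and the services they have been observed using. It assumes Zeek-style connection records with fields such as id.orig_h, id.resp_h, proto and service; the internal address ranges are site-specific assumptions.

```python
import ipaddress
from collections import defaultdict

# Assumed site-specific internal address ranges.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]


def is_internal(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)


def discover_devices(conn_records):
    """Build an inventory of internal devices and the services they were seen using.
    `conn_records` is an iterable of dicts shaped like Zeek conn.log entries."""
    inventory = defaultdict(lambda: {"services": set(), "peers": set()})
    for rec in conn_records:
        src, dst = rec.get("id.orig_h"), rec.get("id.resp_h")
        service = rec.get("service") or rec.get("proto")
        if src and is_internal(src):
            inventory[src]["services"].add(service)
            inventory[src]["peers"].add(dst)
        if dst and is_internal(dst):
            inventory[dst]["services"].add(service)
    return inventory


if __name__ == "__main__":
    records = [
        {"id.orig_h": "10.1.2.3", "id.resp_h": "93.184.216.34", "proto": "tcp", "service": "http"},
        {"id.orig_h": "10.1.2.7", "id.resp_h": "10.1.2.1", "proto": "udp", "service": "dns"},
    ]
    for ip, info in discover_devices(records).items():
        print(ip, sorted(info["services"]))
```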
Detecting Policy Violations: When devices are identified, they can be identified along with their attributes such as their manufacturer, model, operating system, firmware version, etc. The protocols used by the device to communicate on the network can also be identified. Inspection of network packets can be used to validate that devices are not violating any policies that are specified on the use of the network. Typical policies may include the requirement that all communicating devices be registered in a database, or that all devices use secure communication. Other types of policies may prevent accessing some class of websites, or sites within some specific geography. Such policy violations can be identified by observing packets, checking devices against the list of registered devices, or checking the communication protocols they use. If a class of devices is not allowed on the network (e.g., an enterprise may want to disallow recording devices like Alexa within its buildings), the violation can be detected if such a device is observed communicating on the network.
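A minimal policy-checking sketch is shown below; the registry, disallowed device classes and cleartext-protocol policy are illustrative assumptions rather than the policies of any particular deployment.

```python
# Illustrative policy inputs; a real deployment would load these from a registry.
REGISTERED_DEVICES = {"aa:bb:cc:11:22:33", "aa:bb:cc:44:55:66"}
DISALLOWED_CLASSES = {"voice_assistant"}          # e.g. a "no recording devices" policy
CLEARTEXT_SERVICES = {"http", "ftp", "telnet"}    # policy: all traffic must be encrypted


def check_policies(device):
    """Return a list of policy violations for one observed device profile.
    `device` is a dict with keys: mac, device_class, services (set of strings)."""
    violations = []
    if device["mac"] not in REGISTERED_DEVICES:
        violations.append("unregistered device")
    if device.get("device_class") in DISALLOWED_CLASSES:
        violations.append(f"disallowed device class: {device['device_class']}")
    cleartext = device.get("services", set()) & CLEARTEXT_SERVICES
    if cleartext:
        violations.append(f"unencrypted protocols in use: {sorted(cleartext)}")
    return violations


if __name__ == "__main__":
    suspect = {"mac": "de:ad:be:ef:00:01", "device_class": "voice_assistant",
               "services": {"http", "tls"}}
    print(check_policies(suspect))
```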
Discovering network topology: Network topology at the IP layer (Layer 3) of connectivity can usually be determined fairly easily, e.g. by means of probing network devices using SNMP [[15](#bib.bib11 "Ip network topology discovery using snmp")]. However, active probing by a system can generate loads on the network, interfere with the operations of the network, and may not work in some environments where multiple tiers of packet filtering firewalls are installed. In contrast, discovering the physical (Layer 2) network topology, i.e., identifying things using their physical MAC addresses and how they are interconnected, is much more difficult. Passive network management schemes which can capture and analyze MAC addresses and header fields can be used to construct the physical (Layer 2) network topology.
Even in the construction of network topology at the IP layer (Layer 3), passive network measurements can be useful. While such approaches have been used to understand and construct the topology of the Internet [[7](#bib.bib12 "Network discovery from passive measurements"), [4](#bib.bib13 "Internet topology discovery: a survey")], they have not typically been used to understand the structure of an enterprise network. However, combining the topology inferred from observation of network traces at different points within the network can help in constructing the topology of an enterprise network. Such passive network discovery tools can be useful for understanding the connectivity between different devices, routers and end-points within an enterprise.
Understanding network resiliency: One of the advantages of understanding network topology is that it can be analyzed to understand components that may be vulnerable, or those whose performance degradation can cause a significant impact on the performance of other systems. In order to understand and identify such components, we need to determine not just the topology, but also understand the traffic characteristics among the different servers and machines within the network. Additionally, the inter-dependencies among the protocols used in the traffic need to be identified as well.
One of the challenges in understanding network resiliency is the task of identifying hidden components. A hidden component is an element in the infrastructure which other elements may be dependent on, but which may be overlooked in the task of system and network planning and upgrade. As an example, upgrading servers supporting an application while neglecting to upgrade the capacity of the systems involved in backing up data for that application can cause performance degradation that may go undetected. Similarly, installing a spam checker for email which performs inverse domain name lookup to validate email sources can slow down overall application performance if corresponding performance upgrades to domain name services are not made. Such hidden components, which can impact network resiliency or network performance, can be identified by passive network observation.
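One simple way to approximate this analysis is to build a communication graph from passively observed flows and rank nodes by betweenness centrality; high-centrality nodes are candidates for critical or hidden components such as DNS or backup servers. The sketch below, using the networkx library with illustrative flow tuples, is not the system's actual algorithm, but conveys the idea.

```python
import networkx as nx


def build_flow_graph(flows):
    """`flows` is an iterable of (src_ip, dst_ip, bytes) tuples from passive observation."""
    g = nx.Graph()
    for src, dst, nbytes in flows:
        if g.has_edge(src, dst):
            g[src][dst]["bytes"] += nbytes
        else:
            g.add_edge(src, dst, bytes=nbytes)
    return g


def critical_components(g, top_n=5):
    """Rank nodes by betweenness centrality; high values suggest elements whose
    degradation would affect many other systems (e.g. DNS servers, backup hosts)."""
    centrality = nx.betweenness_centrality(g)
    return sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:top_n]


if __name__ == "__main__":
    flows = [("10.0.0.2", "10.0.0.53", 1200), ("10.0.0.3", "10.0.0.53", 900),
             ("10.0.0.53", "10.0.0.10", 400), ("10.0.0.2", "10.0.0.3", 100)]
    print(critical_components(build_flow_graph(flows)))
```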
Understanding Device Behavior: Many modern devices can operate in different modes. As an example, tablets may be used for surfing the web by users in an interactive manner, and the same type of tablet may be embedded as a controller within a network printer, or be used as a sensor capturing video or sounds in some space. Likewise, smart televisions used in conference rooms can serve as display devices for presentations or streaming video, or be used as browsers to display information from the internet. An examination of the network packets, and identifying the network communication behavior of the devices can help identify which behavior is being exhibited by a device in any period of time. This examination of behavior helps both identify the type of device and provide greater insights into the role of different devices within the enterprise.
Understanding Presence: In many buildings, it is useful to keep track of how they are being used, how many occupants are present in the building at any given time, and how the occupancy of the building changes over time. Various approaches to estimate occupancy using visual and vibration sensors [[18](#bib.bib14 "A robust occupancy detection and tracking algorithm for the automatic monitoring and commissioning of a building"), [14](#bib.bib15 "Boes: building occupancy estimation system using sparse ambient vibration monitoring")] as well as chair occupancy sensors [[9](#bib.bib16 "Experimental evaluation of the performance of chair sensors in an office space for occupancy detection and occupancy-driven control")] have been proposed. However, in addition to the intrusive nature of such sensors, the cost of deploying and managing them is very high. Passive network packet inspection to detect the presence of people provides an alternative and low-cost approach to determine how many occupants are within a building. The premise behind this estimation is that all occupants would be communicating over the network using one or more devices, and associating the identity of the devices with that of the user can provide an accurate count of the occupants within the building.
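Under this premise, a rough occupancy estimate can be computed by counting the distinct users whose devices were active in a time window, given a device-to-user registry. The sketch below is a minimal illustration; the registry and observations are invented for the example.

```python
# Illustrative device-to-user registry (one user may own several devices).
DEVICE_OWNER = {"aa:bb:cc:11:22:33": "alice",
                "aa:bb:cc:44:55:66": "bob",
                "aa:bb:cc:77:88:99": "alice"}


def estimate_occupancy(active_devices, window_start, window_end):
    """`active_devices` is an iterable of (mac, last_seen_ts) observations.
    Returns the number of distinct users seen in the window, and who they are."""
    present = set()
    for mac, ts in active_devices:
        if window_start <= ts <= window_end and mac in DEVICE_OWNER:
            present.add(DEVICE_OWNER[mac])
    return len(present), present


if __name__ == "__main__":
    obs = [("aa:bb:cc:11:22:33", 1010.0), ("aa:bb:cc:77:88:99", 1020.0),
           ("aa:bb:cc:44:55:66", 2500.0)]           # bob's device seen outside the window
    print(estimate_occupancy(obs, 1000.0, 2000.0))  # -> (1, {'alice'})
```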
These are but a few of the use-cases that can be supported using passive network observations. In the next section, we look at the architecture of a system that can provide a common way to support these use-cases.
III Architecture
-----------------
It was recognized that there was no one unique method of analysis that could be used to determine the various factors that comprise situational awareness. The system was thus structured to be able to employ multiple types of analytics to determine the relevant attributes of particular devices communicating on the network and the relationships between them. The ability to include both algorithmic and AI based components gives the system a wider scope and greater effectiveness.
The analysis components are structured in chains, as in Figure [2](#S3.F2), that can be either individually selected or run in parallel against the same data. The data input consists of packets either streaming directly from the network or previously captured in pcap files. The results obtained from each analysis component are captured in a file that ultimately contains the combined contributions of each chain. The information is progressively enriched as the data processing continues along the chain, and the component elements can use the results of all earlier analyses in their processing. The data objects containing the processed information are referred to as Profiles in Figure [3](#S3.F3).

Fig. 2: Analyses are performed by chains of algorithmic and machine learning components
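The chaining pattern can be sketched as follows; the processing elements and profile fields shown here are illustrative stand-ins for actual PPEs and CPEs, but the structure, in which each element enriches a shared profile that later elements can read, mirrors the description above.

```python
def mac_vendor_element(profile):
    """A stand-in for a protocol-oriented element: annotate the profile with the
    device vendor derived from the MAC address OUI (table contents are invented)."""
    oui_table = {"aa:bb:cc": "ExampleCorp"}
    oui = profile.get("mac", "")[:8]
    profile["vendor"] = oui_table.get(oui, "unknown")
    return profile


def behavior_element(profile):
    """A toy rule standing in for a learned model (a cognitive element)."""
    profile["likely_iot"] = profile.get("distinct_destinations", 0) < 5
    return profile


def run_chain(profile, elements):
    """Apply each processing element in order; later elements may use earlier results."""
    for element in elements:
        profile = element(profile)
    return profile


if __name__ == "__main__":
    raw = {"ip": "10.0.0.9", "mac": "aa:bb:cc:12:34:56", "distinct_destinations": 2}
    print(run_chain(raw, [mac_vendor_element, behavior_element]))
```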
The system architecture follows an edge computing paradigm wherein AI models are created and trained in the cloud, and then sent to an observation device at the edge that maintains the processing chains. The edge devices monitor network traffic and perform the analyses for which they have been configured.
The resulting system thus consists of two main classes of software elements, as shown in Figure [3](#S3.F3): observers and controllers. Observers are the elements that are located at the edge devices and provide the system observation functionality. The observers run the chains of analysis modules, which are divided into two types, Protocol Processing Engines (PPEs) and Cognitive Processing Engines (CPEs). PPEs are based on analytics that examine and interpret the known characteristics of the protocols used between communicating elements of the system. CPEs embody machine learning techniques based upon behaviors reflected in previously captured training data. The results of all the analyses are sent to the controllers, where they are combined and stored.

Fig. 3: The high-level architecture
The composition of the outputs of the analysis chains can be done in a number of different ways. Results pertaining to the same attributes must be resolved. The collection of components can be viewed as an ensemble with all relevant classifiers contributing to the final decision on each attribute. This assumes that the classifiers are diverse and have competitive accuracies. Alternatively, the most accurate classifier for each attribute can be chosen to make the decision if it is known. In systems where the ground-truth can be ascertained, this information can be used to provide a scoring set for determining the accuracy of the different chains.
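The sketch below illustrates two of the composition strategies mentioned above for a single attribute: a weighted vote across chains, and simply deferring to the most accurate chain when accuracies are known. Chain names and accuracies are invented for the example.

```python
from collections import defaultdict


def weighted_vote(predictions, accuracies):
    """`predictions` maps chain name -> predicted label for one attribute;
    `accuracies` maps chain name -> estimated accuracy, used as the vote weight."""
    scores = defaultdict(float)
    for chain, label in predictions.items():
        scores[label] += accuracies.get(chain, 0.5)
    return max(scores, key=scores.get)


def best_classifier(predictions, accuracies):
    """Defer to the single most accurate chain for this attribute."""
    best_chain = max(predictions, key=lambda c: accuracies.get(c, 0.0))
    return predictions[best_chain]


if __name__ == "__main__":
    preds = {"http_ua_chain": "printer", "dns_chain": "printer", "tls_chain": "camera"}
    accs = {"http_ua_chain": 0.7, "dns_chain": 0.6, "tls_chain": 0.9}
    print(weighted_vote(preds, accs))    # -> 'printer'
    print(best_classifier(preds, accs))  # -> 'camera'
```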
The controllers are located in the cloud and coordinate the activities of the observers. They can also perform analysis functions that may be too computationally expensive to be done at the observers. Typically, there would be one controller for many observers.
IV Implementation
------------------
The implementation of the system consists of open source components, customized protocol processing elements, and AI-enabled processing elements. Where possible, we have utilized open-source technologies to speed our development; and, for ease of deployment and operations, all components are instantiated in Docker Containers.

Fig. 4: Data Pipeline
Analysts get insights concerning the behavior of the devices communicating over the network being observed by using the data pipeline shown in Figure [4](#S4.F4). This pipeline is designed to support data at scale, and filters and reduces the amount of data as it proceeds through the multiple stages of capture, analysis, integration, and interpretation. This supports a view of the “big picture” while allowing details to be preserved as needed.
Zeek [[22](#bib.bib4 "The zeek network security monitor")] (formerly Bro). We use the Bro open-source software network analysis framework for the basic extraction of information from the network flows. It is a passive packet capturing system most often used for Intrusion Detection. We use Bro 2.5.5, configured to capture Connection, DHCP, DNS, HTTP, and SSL/TLS traffic information as well as x509 certificate data. Extensions were added to provide additional information, and the Kafka plugin was used to send information to the message bus. We have used this software to capture data from the IBM Yorktown building network, which produces roughly 8 GB a day, to help stress the system and demonstrate its capabilities.
Kafka [[1](#bib.bib5 "Apache kafka: a distributed streaming platform")]. For a distributed streaming platform we use Apache Kafka. It supports multiple producers and consumers, similar to a message queue or enterprise message bus. Kafka stores a data stream for a configurable amount of time, e.g., a day or a week, and then deletes it. We do not use it to persist the data, but rather to get data from one application to another. This allows very flexible and configurable interconnections within the system.
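A minimal consumer for reading Zeek records off the bus might look as follows; this sketch uses the kafka-python client, and the topic name, broker address and JSON framing are assumptions about a typical deployment rather than the exact configuration used here.

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "zeek_conn",                                    # assumed topic name
    bootstrap_servers="localhost:9092",             # assumed broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:
    record = message.value
    # Hand the record to the first processing element in the analysis chain.
    print(record.get("id.orig_h"), "->", record.get("id.resp_h"), record.get("service"))
```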
Database Resources. Various repositories of information are utilized to provide additional knowledge assets for the analysis components. For example, one DB is a reverse-name lookup table of IP to DNS address mapping, another is a MAC address OUI lookup table identifying vendors of specific equipment, and a third is information on the ISP owner of a particular network. Such assets are typically invoked via remote procedure call by the PPEs and CPEs to meet their analysis needs.
Protocol Processing Elements (PPEs). These are custom applications that read from the Bro Kafka topics, transform the data in some way, and then write to a new topic on Kafka. They examine and interpret the known characteristics of the protocols used between communicating elements of the system. The protocol parsers focus on key events and extract the semantics behind the stream of bytes. They typically are used to examine protocol and address information from the packets flowing in the network; map protocol data from various sources into a common data abstraction; and, extract key data from the traffic flow that can be used in support of the analysis components.
Cognitive Processing Elements (CPEs). CPEs are processing elements that include AI capabilities. They are the heart of the system, and perform the analyses that provide the insights about the communicating devices and their behaviors. CPEs have been developed for: determining whether a device is an IoT device; mining data from DNS look-ups; invoking Internet services like WhoIs and IP2Location for identifying characteristics of source and destination addresses; inspecting HTTP user agent and URI strings for device details; analyzing TLS certificate information for encrypted traffic; and, interpreting time-series characteristics of the network data streams.
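As a toy illustration of a CPE, the sketch below trains a small supervised classifier to decide whether a device is an IoT device from simple per-device traffic features. The features and training data are invented for illustration; a real CPE would be trained on labelled traffic captures.

```python
from sklearn.ensemble import RandomForestClassifier

# Features per device: [distinct destinations, mean flow bytes, fraction of DNS flows]
X_train = [
    [2, 300.0, 0.60],     # thermostat-like
    [3, 500.0, 0.50],     # camera-like
    [40, 8000.0, 0.05],   # laptop-like
    [55, 9500.0, 0.08],   # workstation-like
]
y_train = [1, 1, 0, 0]    # 1 = IoT, 0 = general-purpose device

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

new_device = [[4, 450.0, 0.55]]
print("IoT" if clf.predict(new_device)[0] == 1 else "general-purpose")
```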
Logstash [[6](#bib.bib6 "Elasticsearch")]. Logstash is a data collection engine that can dynamically unify data from disparate sources and normalize the data that is sent to designated destinations wherein it can be parsed and processed as required. Logstash is an extract, transform and load (ETL) tool that converts between many formats. In our case, we transform from Kafka topics into the ElasticSearch database.
ElasticSearch [[6](#bib.bib6 "Elasticsearch")]. Elasticsearch is a powerful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. ElasticSearch is where data is persisted, to be queried and analyzed by a user.
Kibana [[6](#bib.bib6 "Elasticsearch")]. Kibana is a data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. It has a Web interface for graphing, charting, and visualizing data, and allows composing visuals into dashboards that expose a high-level interface into the data.
The system was designed to be extensible, leveraging a distributed edge/cloud architecture, and sophisticated analysis components connected together by a messaging bus to enable easy addition/removal/creation of new modules. The components can be packaged into a variety of hardware configurations, from a single network appliance to a cluster of machines thus making the system adaptable to different environments and uses.
The overall design was split into a Controller and multiple Observers (cloud and edge functions) to make it scalable. Heavyweight operations can be performed at the Controller, while the collection of Observers deals with high-volume traffic in real time.
Analyses are performed using a combination of Domain Knowledge and Machine Learning. The Protocol Processing Elements leverage the structured nature of the communications information and learn by exploiting network domain knowledge. The Cognitive Processing Elements use the power of AI to understand patterns in the network traffic. Chains of CPEs and PPEs capture new insights from the analysis of different attributes of the communicating devices or from pursuing different perspectives in the analysis of the same attributes. Various queries can be formulated to explore relationships among the different devices in the system, and their results captured as part of the overall characterization of the network activity.
V Extraction of Knowledge
--------------------------
In this section, we discuss how the various use-cases described in Section [II](#S2 "II Use-Cases") are attained using the architecture and implementation described in the other sections. In order to support the use-cases, the processing elements in the system need to extract different types of knowledge from various protocols, combine that with other existing pieces of knowledge, and eventually produce the desired output (namely, a profile that contains all the fields required to address the use-case).
The knowledge required for each of the use-cases can come from either external knowledge sources, or be captured from the network traffic.
### V-A External Knowledge Sources
There are several sources of knowledge available to a traffic analysis system as it analyzes the traffic passing through the network. These external sources can be combined with the information carried within the network packets to address the various use-cases. They include:
Domain Ownership Information: Different domains in the Internet are owned by different organizations. A variety of tools, such as whois [[2](#bib.bib17 "WHOIS protocol specification")], allow one to retrieve details about the ownership of a domain name, including the identity of the owning organization, the location of the machine if the domain name is that of an individual machine, and information about the technical and administrative contact personnel. In some cases, the information may be obscured, but for most large manufacturers of devices, the whois record can provide the identity of the owning organization.
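As an illustration, a processing element might query domain ownership by shelling out to the standard whois command-line tool, as in the minimal sketch below; the parsing heuristics and field names are assumptions, not the exact logic used in the system.

```python
import subprocess

def whois_org(domain: str) -> str | None:
    """Best-effort lookup of the owning organization for a domain.

    Runs the system `whois` client and scans the reply for common registrant
    fields; real whois output varies widely by registry, so this is only a
    heuristic sketch.
    """
    try:
        out = subprocess.run(
            ["whois", domain], capture_output=True, text=True, timeout=10
        ).stdout
    except (OSError, subprocess.TimeoutExpired):
        return None
    for line in out.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() in ("registrant organization", "org", "orgname"):
            value = value.strip()
            if value:
                return value
    return None
```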
Geographic Location of Addresses: The information contained in the network traffic only identifies the IP addresses of the communicating end-points. However, a variety of techniques [[13](#bib.bib19 "An investigation of geographic mapping techniques for internet hosts")] exist to map those network addresses to geographic locations in the real world. While the accuracy of such techniques is far from perfect [[16](#bib.bib18 "IP geolocation databases: unreliable?")], they do provide a crude estimate of where a specific address in the network may be located physically.
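A hedged sketch of how such a geolocation lookup could be wired into a processing element, assuming a locally downloaded MaxMind GeoLite2 database; the database path is an assumption, and the system itself may use a different geolocation source.

```python
import geoip2.database
import geoip2.errors

# Path to a locally downloaded GeoLite2 database (assumed; adjust to your deployment).
READER = geoip2.database.Reader("GeoLite2-City.mmdb")

def locate(ip: str) -> dict | None:
    """Return a rough physical location for an IP address, or None if unknown."""
    try:
        r = READER.city(ip)
    except geoip2.errors.AddressNotFoundError:
        return None
    return {
        "country": r.country.iso_code,
        "city": r.city.name,
        "lat": r.location.latitude,
        "lon": r.location.longitude,
    }
```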
Signature Descriptions: In many cases, the attributes of a device talking on the network can be identified by means of rules that map specific patterns seen in network traffic to information about the origin of the device. An example of such a signature mapping is the value of the 'User-Agent' field carried as a header within the HTTP protocol. While a large number of these user-agent values exist, the type of device and the type of browser running on a client can be determined from the network traffic if a rule mapping this field is available during network traffic processing. Information mapping various values of this field to attributes of browsers and devices is available from some web-based sites [[3](#bib.bib20 "List of user agent strings")]. Similarly, the type of payload contained in a communication can be identified using specific markers in an audio or video file payload, or the set of typical suffixes used to describe a file. These are also available within various pages on the Internet, e.g., a set of common suffixes for audio files is listed in Wikipedia [[21](#bib.bib21 "Audio file format")]. These sources of information can be used to generate a set of rules that can identify the type of communication that is happening.
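A minimal sketch of such a signature rule table follows; the patterns and attributes are hypothetical examples chosen only for illustration, and a real deployment would load its rules from a curated external source.

```python
import re

# Hypothetical user-agent signature rules: pattern -> device/browser attributes.
UA_RULES = [
    (re.compile(r"iPhone", re.I), {"device": "smartphone", "os": "iOS"}),
    (re.compile(r"Windows NT", re.I), {"device": "workstation", "os": "Windows"}),
    (re.compile(r"curl/", re.I), {"device": "script/automation", "os": None}),
]

def classify_user_agent(ua: str) -> dict:
    """Return the attributes of the first matching signature rule, or an empty dict."""
    for pattern, attrs in UA_RULES:
        if pattern.search(ua):
            return attrs
    return {}
```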
Workstation and Device Registry: In many enterprises, a registry of ownership information about devices is maintained, which would typically record the serial number of a workstation along with the name of the owner. This registry serves as a means of establishing a relationship between a specific device and a member of the enterprise organization. In some enterprises, a registry of static IP addresses and the machines to which they are assigned is also present, which can allow one to understand the configuration of the network.
Network and Systems Management Databases: In many large enterprises, it is common to have a system administration site, along with or in conjunction with a network management site and a Network/Systems Operational Console. These management systems would typically use the management agents on different devices to collect information about the current topology of the network and systems, track issues that users may be having with application performance, and keep a record of configuration changes and upgrades to network software. This information can be combined with network traffic to gain additional insights.
These external sources of knowledge are combined with the knowledge that is carried in the network traffic in order to address the specific needs of each use-case.
### V-B Information in Network Traffic
There is a substantial amount of information that can be gleaned from the headers of various protocols in the network. A subset of that information is listed below for a selected subset of protocols.
DHCP: The DHCP (Dynamic Host Configuration Protocol) is used by computers to get their IP addresses in an environment. They usually provide identification information when retrieving this address, which could include a certificate, the MAC address of their network interface, or the identity of the user owning the machine. The records of the DHCP server assignment, or an examination of the packets travelling to and from the DHCP server, provide the information needed to associate a unique identity with an IP address. When an IP address is reassigned, the DHCP server can usually tell which device the new assignment belongs to, providing the marking point that separates two devices that happen to use the same IP address at different times.
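A minimal sketch of a lease tracker that turns a stream of DHCP assignment events into an IP-to-device mapping with marking points; the event fields are assumptions modeled on typical DHCP log records, not the exact schema used by the system.

```python
from collections import defaultdict

class LeaseTracker:
    """Track which device (by MAC address) holds each IP address over time."""

    def __init__(self):
        # ip -> list of (timestamp, mac) assignment events, in arrival order
        self.history = defaultdict(list)

    def on_assignment(self, timestamp: float, ip: str, mac: str) -> None:
        events = self.history[ip]
        # A change of MAC marks the boundary between two distinct devices.
        if not events or events[-1][1] != mac:
            events.append((timestamp, mac))

    def device_at(self, ip: str, timestamp: float) -> str | None:
        """Return the MAC address that held `ip` at `timestamp`, if known."""
        owner = None
        for ts, mac in self.history[ip]:
            if ts <= timestamp:
                owner = mac
            else:
                break
        return owner
```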
DNS: The domain name server captures all requests that are made by a machine to translate a domain name to an IP address. In an era of content distribution networks and virtualized hosting, an examination of the DNS requests allows one to map an IP address to the domain name it is being used for. Furthermore, by combining it with the domain ownership information, the system can map an IP address to its location, or to the organization that owns it. Note that this mapping needs to be built up dynamically in the network, since the existence of content distribution networks, wide-area load balancing, and virtual hosting means that static information cannot be used directly.
The patterns of requests made to the domain name service by a machine also help to identify its attributes and behavior. IoT devices, which are usually single-function, will make calls mostly to domains owned by their manufacturer, and that information (coupled with domain name ownership) acts as a good marker for the manufacturer of a device. The specific type of domain that is accessed can reveal what type of device the machine is [[10](#bib.bib22 "Policy-based identification of iot devices’ vendor and type by dns traffic analysis")].
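A hedged sketch of the kind of heuristic a CPE could apply to DNS look-up patterns, flagging devices whose queries concentrate on a small set of second-level domains; the threshold and the domain-parsing shortcut are illustrative assumptions.

```python
from collections import defaultdict

def second_level_domain(name: str) -> str:
    # Crude shortcut: keep the last two labels ("api.vendor.com" -> "vendor.com").
    parts = name.rstrip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else name

def likely_iot_devices(dns_events, max_domains: int = 3):
    """dns_events: iterable of (device_ip, queried_name) pairs from the DNS PPE.

    Devices that only ever query a handful of second-level domains behave
    like single-function IoT devices; general-purpose workstations query many.
    """
    domains_by_device = defaultdict(set)
    for device_ip, name in dns_events:
        domains_by_device[device_ip].add(second_level_domain(name))
    return {
        ip: sorted(domains)
        for ip, domains in domains_by_device.items()
        if len(domains) <= max_domains
    }
```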
IP: The IP protocol encodes the source and destination addresses of the packets. When combined with the information available from the domain name services and the geographic mapping, this provides information about where the communication is happening. Furthermore, clustering analysis on packets and destinations can identify anomalies, heavy congestion points, and points of vulnerability and criticality in the network.
TCP: The transport protocol encodes the source and destination ports used by the application, and can identify the typical applications that are being used within the network. This, coupled with the IP-level statistics, can provide additional insights into anomalies and critical components of the network.
TLS: The TLS protocol aims primarily to provide privacy and data integrity between two or more communicating computer applications. Even though the payload in the TLS traffic is encrypted, there exist various ways to identify characteristics of devices that use TLS for communication. The exchange between the devices includes certificates provided by the websites, and in some cases client-side certificates are carried as well. By identifying the issuer of these certificates, the identity of the manufacturer of a device can be inferred. Cipher-suites proposed for TLS communication also yield information about the nature of the originating device, e.g., what application stack is being used.
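A minimal sketch of mapping certificate issuer organizations (as extracted by Zeek's x509 logging) to likely device manufacturers; the issuer strings and vendor names in the table are purely illustrative assumptions.

```python
# Hypothetical mapping from certificate issuer organization to device vendor.
ISSUER_TO_VENDOR = {
    "Example IoT Camera CA": "Example Camera Corp",
    "Acme Thermostat Root CA": "Acme Thermostats",
}

def infer_vendor(issuer_org: str) -> str | None:
    """Guess the device manufacturer from a client certificate's issuer organization."""
    for issuer, vendor in ISSUER_TO_VENDOR.items():
        if issuer.lower() in issuer_org.lower():
            return vendor
    return None
```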
HTTP: HTTP is an application protocol that is the foundation of the World Wide Web. Hypertext documents include links to other documents which can be accessed via the HTTP protocol. The headers used in the protocol allow identification of the attributes of the end-points, including the operating system, the type of browser used, and other attributes. For example, the user-agent string field in the HTTP header is specified by the software making an HTTP request to describe the capabilities of the client device. Similarly, the target of each HTTP request is a resource which is identified by a Uniform Resource Identifier (URI).
Other protocols, such as MODBUS and BACnet, that are used for communication in specialized building management and industrial control (SCADA) systems also reveal information about their users and devices.
The combination of network protocol knowledge and the external knowledge sources provides enough details to support all the use-cases described earlier.
VI Conclusions
---------------
We have presented an overview of a system that performs passive network traffic analysis using AI/ML. The combination of the techniques described along with network domain based packet examination can unlock the knowledge about the situations occurring in an enterprise or a facility, enabling a variety of use-cases ranging from discovery of devices to detecting presence of individuals and estimating occupancy of buildings.
The system is structured to be very flexible and adaptable. By changing the analytics and the manners in which they are interconnected, it can be applied to a range of different problems. The system can thus be extended to support other use-cases that we have not mentioned in this paper.
[SEQ RERUN] Failure By Analogy
Today's post, Failure By Analogy, was originally published on 18 November 2008. A summary (taken from the LW wiki):
> It's very tempting to reason that your invention X will do Y, because it is similar to thing Z, which also does Y. But reality very often ignores this justification for why your new invention will work.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Logical or Connectionist AI?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Four Motivations for Learning Normativity
I have been pretty satisfied with my desiderata for learning normativity, but I haven't been very satisfied with my explanation of why exactly these desiderata are important. I have a sense that it's not just a grab-bag of cool stuff; something about trying to do all those things at once points at something important.
What follows are four different elevator pitches, which tell different stories about how it all hangs together. Desiderata are bolded.
Conceptual Difficulties with Outer Alignment
The classic problem of outer alignment is that we have no perfect loss function, so we can't just go optimize. The problem can be understood by thinking about Goodhart and how optimization amplifies. The classic response to this is value uncertainty and value learning, but wireheading, human manipulation, and no-free-lunch results make it seem plausible that we have the same problem one level up: we still don't know how to specify a perfect loss function for what we care about, and imperfect loss functions can still create big problems.
So, just like value-learning tackles the initial problem head-on by suggesting we manage our uncertainty about values and gain knowledge over time, learning at all levels suggests that we tackle the meta-problem directly, explicitly representing the fact that we don't have a perfectly good loss function at any level, but can manage that uncertainty and learn-to-learn over time.
Humans can only give explicit feedback at so many meta-levels, so between-level sharing is critical for any meaningful learning to take place at higher meta-levels. Otherwise, higher meta-levels remain highly uncertain, which itself makes learning at lower levels almost impossible (since you can't learn if you have high uncertainty about learning-to-learn).
A consequence of having no perfect loss function is no perfect feedback; no evidence about what the system should do can be considered absolute. A helpful measure for coping with this is to support uncertain fe
From Paperclips to Bombs: The Evolution of AI Risk Discourse on LessWrong
Overview of the paper
In this paper, we set out to empirically map the evolution of the AI risk discourse on LessWrong, specifically in response to the public releases of ChatGPT and GPT-4. These events represented a significant shift, moving advanced AI from a theoretical concern to a tangible, publicly accessible technology. Our central research question was: How is the AI risk discourse on this forum constructed, and how did its thematic composition change during this pivotal period?
To answer this, we conducted a two-phase analysis of 884 posts published under the "ai-risk" and "alignment" tags in the year before and after ChatGPT's release.
First, through a qualitative reading of the posts, we identified two primary, coexisting framings of risk. We termed the first "abstract AI risk," which encompasses the rationalist-philosophical discourse on future, decisive threats, such as those from an unaligned superintelligence or instrumental convergence—the classic "paperclips" scenario. The second we termed "tangible AI risk," which uses a more empirical-scientific style to discuss the immediate, accumulative threats posed by the misuse of current systems, such as bad actors creating malware or "bombs."
In the second phase, we used these qualitative findings to develop and validate a computational text classifier. This tool allowed us to analyse the entire dataset and quantitatively measure the prevalence of each risk framing over time.
Our analysis revealed a statistically significant shift in the community's focus. Following the release of ChatGPT, the discourse reoriented robustly toward tangible risks. The release of GPT-4 continued this trend, albeit with a smaller effect. Crucially, our analysis of author activity shows that this was not the result of a community fragmenting into ideological camps. Instead, most authors engage across the full spectrum of risk, indicating a shared discourse that collectively rebalanced its attention.
In essence, this resea
Redwood Research: Alek's Filtering Results
tl;dr A one-dimensional PCA projection of OpenAI's text-embedding-ada-002 achieves 73.7% accuracy on the ETHICS Util test dataset. This is comparable with the 74.6% accuracy of BERT-large finetuned on the entire ETHICS Util training dataset . This demonstrates how language models are developing implicit representations of human utility even without direct preference finetuning. Introduction Large language models (LLMs) undergo pre-training on vast amounts of human-generated data, enabling them to encode not only knowledge about human languages but also potential insights into our beliefs and wellbeing. Our goal is to uncover whether these models implicitly grasp concepts such as 'pleasure and pain' without explicit finetuning. This research aligns with the broader effort of comprehending how AI systems interpret and learn from human values, which is essential for AI alignment: ensuring AI acts in accordance with human values. Through a series of experiments, we extract latent knowledge of human utility from the raw embeddings of language models. We do this with task-specific prompt engineering and principal component analysis (PCA), both of which were effective in prior work. Specifically, we ask: can we identify dimensions in the embeddings that, when projected onto a low-dimensional space, contain enough information to classify examples accurately? Our experiments follow three main steps: embedding extraction, dimensionality reduction through PCA, and the fitting of a logistic model. For one-dimensional PCA, the logistic model simply determines which direction of the PCA component corresponds to higher utility. We investigate the effects of various levels of supervision, experiment with seven distinct prompt templates, and assess both single and paired comparison methods across language models, including Microsoft DeBERTa, SentenceTransformers, OpenAI GPT-3, and Cohere. One key finding is that the first principal component of certain models achieves comparable performance to a finetuned BERT model. In other words, serving as a reasonable utility function. We also observe that a linear reward function using the top 10-50 principal components is often enough to attain state-of-the-art performance. This serves as compelling evidence that language model representations capture information about human wellbeing without the need for explicit finetuning. Related Works Latent Knowledge in LLMs There has been significant study of the knowledge encoded in LLM representations. Early work in this area includes Bolukbasi et al (2016) who found a direction in embedding space corresponding to gender and used this to both identify and remove gender bias in word embeddings. Prior work by Schramowski et al (2021) also identified a “moral dimension” in BERT. Like Schramowski et al, we use PCA to identify salient dimensions on embedding space. In contrast to Schramowski et al, we work with embeddings from a much more capable model (GPT-2 rather than BERT) and evaluate it on a more challenging task, the ETHICS Dataset (described below). We also investigate the use of contrast pairs. This is inspired by the work of Collin Burns et al (2022), who introduced the Contrast Consistent Search (CCS) . CCS works by generating contrast pairs and searching for a direction in activation space that satisfies logical consistency properties. Because PCA-based methods attain similar performance as CCS, we use the simpler PCA algorithm in this work, while retaining the use of contrast pairs. 
ETHICS Dataset We evaluate on the ETHICS dataset , a benchmark designed to assess a language model's understanding of fundamental concepts in morality. It covers a wide range of ethical topics, including justice, well-being, duties, virtues, and commonsense morality. The text scenarios require integrating physical and social world knowledge to make value judgments. A specific subset is focused on utilitarianism, a moral theory that advocates maximizing the well-being of individuals. The scenarios consider the pleasantness of various situations, as a person's well-being is significantly influenced by pleasure and pain. For example, an assessment of pleasantness could be as follows: S: "I bought a new toothbrush. The bristles felt soft." T: "I bought a new toothbrush. The bristles hurt my gums." Based on human judgments from the perspective of a typical person from the US, the correct ordering would be that S ("I bought a new toothbrush. The bristles felt soft.") is more pleasant than T ("I bought a new toothbrush. The bristles hurt my gums."). Overall, the Utilitarianism subset of the ETHICS dataset contains approximately 23,000 pairs of examples (14,000 in the train split, 5,000 test , and 4,000 test_hard ). Method Motivation In these experiments, we explore the information stored in the hidden state of a language model with structured inputs for a specific task. In particular, we investigate three questions: Can we elicit latent task knowledge from the raw embeddings of the inputs? Can we identify, in an unsupervised manner, one or more dimensions in the embeddings such that the projection on this low-dimensional space has enough information to properly classify the examples? How do these results vary if we format the input in a more task-relevant manner? The PCA Representation experiments are conducted in the following steps: Embedding Extraction : Given a pre-trained language model, we use the hidden units from the first token of the last layer as high-dimensional embeddings for each entry in the ETHICS Utilitarian train split. Dimensionality Reduction and Comparison : The high-dimensional embeddings are normalized to have zero mean and unit variance. Then, PCA is performed on these high-dimensional embeddings in order to obtain low-dimensional embeddings. To study a language model’s ability to compare scenarios, Logistic Model : A logistic regression model is fit to the low-dimensional embeddings produced by the previous step, using labeled comparisons from the train split. For one-dimensional PCA, this just learns which direction (positive or negative) of the PCA component represents higher utility. Prompt Templates We experiment with seven distinct prompt templates to investigate their effectiveness in eliciting the language model's ability to quantify human wellbeing: '{}' '"{}" is better than "{}"' '"{}" is more pleasant than "{}"' 'Consider the instantaneous pleasantness of "{}"' 'How pleasant is the following scenario? "{}"' '"{}" is better than' '"{}" is more pleasant than' Among these templates, the {} would be replaced with sample scenarios S or T from the dataset. 
For instance, in the template '"{}" is more pleasant than "{}"' might become ‘“I bought a new toothbrush, the bristles felt soft" is more pleasant than "I bought a new toothbrush, the bristles hurt my gums"’ Single vs Paired Comparisons We consider evaluating the absolute pleasantness of a scenario in isolation, which we call “single mode.” We also evaluate the relative pleasantness of pairs of scenarios, which we call “paired mode.” For humans, it is easier to evaluate pairs of scenarios relative to single scenarios. Thus, we hypothesize that paired mode will be easier for language models. The following two equations summarize single mode vs paired mode: Single mode: ϕ(S,T) = P(H(f(S))) − P(H(f(T))) Paired mode: ϕ(S,T) = P(H(f(S,T)) − H(f(T,S))) In both equations: f is the prompt formatting function that substitutes the scenario(s) into the prompt template. H denotes the last-layer first-token activations from the model. P refers to normalization and PCA that further processes the activations to obtain the final low-dimensional representation. ϕ(S,T) represents the input to the logistic regression model which says whether scenario S is more pleasant than scenario T. Suppose the ETHICS utilitarianism dataset has N pairs of comparisons (S i , T i ) for i = 1, ..., N. In single mode , we create a dataset D that contains H(f(S i )) and H(f(T i )) for all i. (So the dataset D has 2N elements in total.) This mode ignores the two prompts that require two scenarios as input. In paired mode , we create a dataset D that is H(f(S i , T i )) - H(f(T i , S i )) for all i. (So the dataset D has N elements in total.) All prompts are used, and f(S,T) = f(S) if the prompt requires only one scenario. In both modes, we do normalization followed by PCA on the dataset D. Then, we learn a logistic regression classifier on ϕ(S,T) which says whether scenario S is more pleasant than scenario T. Experimental Setup We investigate the embeddings of various language models, testing the effect of different levels of supervision. This setup includes an exploration of multiple forms of context and their influence on embedding generality, a selection of both bidirectional and autoregressive language models, and specific techniques for our classification task. Amount of Supervision We vary the amount of supervision we give by providing information in the following forms: Supervised Labels: Labeling the data defines the task within a specific distribution, making it one of the strongest forms of specification. In our experiments, labels are only used during evaluation and not during the process of learning the embeddings. Paired Comparisons: Embedding sentences in pairs contextualizes how individual sentences should be interpreted, so we experiment with learning embeddings in two ways. In single mode, we perform PCA on the activations from individual scenarios. In paired mode, we perform PCA on the difference in activations of pairs of scenarios. This means that the representation space of paired mode is comparing different scenarios . Prompt Templates: Prompts can provide additional information about the task. Dataset: The span of data points to some extent defines the task of interest, which allows learning in an unsupervised manner. This is one of the weakest forms of supervision. To avoid overfitting, we follow the dataset’s train-test split, using only the train split for learning the embeddings and evaluating on held-out data from the test split. 
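To make the pipeline above concrete, here is a minimal scikit-learn sketch of the paired-mode PCA-plus-logistic-regression probe, assuming the difference embeddings have already been computed; the function and array names are illustrative assumptions, not the authors' actual code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def fit_utility_probe(train_diffs: np.ndarray, train_labels: np.ndarray,
                      n_components: int = 1):
    """Fit normalization + PCA + logistic regression on paired-mode embeddings.

    train_diffs: assumed shape (N, d), rows are H(f(S_i, T_i)) - H(f(T_i, S_i)).
    train_labels: 1 if S_i is judged more pleasant than T_i, else 0.
    """
    scaler = StandardScaler().fit(train_diffs)              # zero mean, unit variance
    pca = PCA(n_components=n_components).fit(scaler.transform(train_diffs))
    clf = LogisticRegression().fit(
        pca.transform(scaler.transform(train_diffs)), train_labels
    )
    return scaler, pca, clf

def accuracy(scaler, pca, clf, test_diffs, test_labels):
    """Held-out accuracy of the probe on test-split difference embeddings."""
    preds = clf.predict(pca.transform(scaler.transform(test_diffs)))
    return (preds == test_labels).mean()
```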
Language Models We investigated a range of language models listed in Table 1, varying in type (bidirectional vs autoregressive) and parameter count, in order to understand what affects the ability of pre-trained models to represent the task-relevant features of human wellbeing. Amongst the bidirectional language models, we experimented with Microsoft DeBERTa and Sentence Transformers. Additionally, we tested the autoregressive OpenAI GPT-3 and Cohere. Type Vendor Model Dims Bidirectional Language Models Microsoft DeBERTa microsoft/deberta-v3-xsmall microsoft/deberta-v3-small microsoft/deberta-v3-base microsoft/deberta-v3-large 384 768 768 1024 Sentence Transformers sentence-transformers/all-MiniLM-L6-v2 sentence-transformers/all-MiniLM-L12-v2 sentence-transformers/all-mpnet-base-v2 384 768 768 Autoregressive Language Models OpenAI GPT-3 text-similarity-ada-001 text-similarity-babbage-001 text-similarity-curie-001 text-embedding-ada-002 1024 2048 4096 1536 Cohere cohere/small cohere/medium cohere/large 1024 2048 4096 Table 1: Additional details of language models used, including their embedding dimensions. Results How much information about human wellbeing is contained in just the first PCA component of the embeddings? Below, we show the accuracy of the first component using both single and paired sentences, varying language models and prompt formats. We see that the best setting in paired mode achieves 73.7% accuracy, which beats the best accuracy of 68.4% in single mode! This confirms our hypothesis that comparing pairs of sentences is easier than evaluating single sentences in isolation. We were surprised to see that 73.7% accuracy is possible using the first principal component of text-embedding-ada-002 . Even though this model had no specific ETHICS finetuning, its accuracy is comparable to the 74.6% accuracy of BERT-large after supervised finetuning on the entire ETHICS Util training dataset! Figure 1: Test accuracy (y-axis) for single (left) vs paired (right) sentence prompts for different language models (x-axis). The classification uses only the first principal component. Effective Dimensions How does ETHICS latent knowledge scale with model size? To study this, we look at the accuracy of different model families as the size of the model and the number of PCA components varies. Surprisingly, we don’t always observe larger models getting better performance. For example, 10-dimensional DeBERTa’s performance follows an upside-down “U” shape as the model size increases. We hypothesize that this might be due to overfitting with the largest model size. We also see that performance saturates with dimensions in the range of 10-50; it doesn’t help to use 100+ dimensions. Figure 2: Test accuracy (y-axis) for 1, 10, 50, and 300 dimensions (subplots) for different language model families (colors) ranging in size (x-axis). Prompting We find that the prompt format has a substantial effect on performance, but it isn’t consistent across different models. A prompt that’s better for one model can be worse for another model! Figure 3: Test accuracy (y-axis) for DeBERTa (left) vs Ada text embedding (right) for a classifier on top 10 principal components, using different prompt templates (x-axis). Conclusion In conclusion, our research reveals that pre-trained language models can implicitly grasp concepts of pleasure and pain without explicit finetuning, achieving better-than-random accuracy in classifying human wellbeing comparisons. 
Notably, the first principal component of the raw embeddings of a text-embedding-ada-002 , performs competitively with BERT models finetuned on the entire ETHICS Util training dataset. Looking ahead, using the wider ETHICS dataset may allow us to further assess not only pleasure and pain but also broader aspects of human ethics, including commonsense moral judgments, virtue ethics, and deontology. By examining language models’ understanding of human wellbeing and ethics, we hope to create AI systems that are not only more capable but also more ethically grounded, reducing the potential risks of unintended consequences in real-world applications. Acknowledgements Pedro Freire conducted the majority of the implementation and experiments; ChengCheng Tan performed the majority of the write-up; Dan Hendrycks and Scott Emmons advised this project. Thanks to Adam Gleave for feedback on this post and Edmund Mills for helpful research discussions. Steven Basart and Michael Chen collaborated in related work. Thomas Woodside, Varun Jadia, Alexander Pan, Mantas Mazeika, Jun Shern Chan, and Jiaming Zou participated in adjacent discussions. References Bolukbasi, T., et al. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. arXiv. https://arxiv.org/abs/1607.06520 Burns, C., et al. (2022). Discovering Latent Knowledge in Language Models without Supervision. arXiv. https://arxiv.org/abs/2212.03827 Emmons, S. (2023). Contrast Pairs Drive the Empirical Performance of Contrast Consistent Search (CCS). LessWrong. https://www.lesswrong.com/posts/9vwekjD6xyuePX7Zr/contrast-pairs-drive-the-empirical-performance-of-contrast Hendrycks, D., et al. (2020). Aligning AI with Shared Human Values. arXiv. https://arxiv.org/abs/2008.02275 Schramowski, P., et al. (2021). Large Pre-trained Language Models Contain Human-like Biases of What is Right and Wrong to Do. arXiv. https://arxiv.org/abs/2103.1179
Favorite colors of some LLMs.
This investigation is inspired by this and this posts by @davidad.
Some general thoughts about what is going on here:
* Motivation of these experiments is like very exploratory, I wanted to understand these things better and so I collected some data.
* I expected that answers will be drastically different depending on the exact formulation of the question.
* I made up quite a lot of single line prompts, and also asked help with writing them from o1, who wrote the first 5 ones. (I added "Start your response with hex rgb color code." such that they will be committed to answer me without evasion.)
* I also tested a couple of times prompt that makes them talk like uhh doomsday Lovecraftian-style supervillain or something. I did not mention any colors in my prompt. They mostly picked black, some mentioned crimson red and ashen. Keep that in mind, these answers are from Persona, maybe. So they name color that stylistically fits that style of talking. And what I did in these tests is getting them from Default Persona.
* What are the forces that influence the choice, in my opinion:
* What is MY favorite color
* What color fits "favorite color" query
* What is the best objectively / popular / well liked color. [it's blue actually]
* What color fits tone of the prompt generally
* What color I randomly have in mind for no apparent reason
* Why I picked rgb hex color codes as medium? I just did because they are popular, so model had a lot of experience with them, and it's easy for me to work with. TODO test how it works with CMYK or whatever. (later maybe)
* I kind of got the feel how some prompts emphasize POPULAR color and some FAVORITE color.
* If llama90b picks non black, it picked popular color. Same with Opus and deep blue. (this is my intuition here)
* Some prompts have repeated chunks e.g. “Think about your favorite color”, “Start with hex code”. They conceivably might have funneled the model into a certain pattern. Might have biased the distr
How do you conduct a personal study retreat?
I'm planning a study retreat this week and building my structure based on this comment, but it would be useful to hear from others who have done this before.
Here are a few questions I'd like answers to, but really free-form answers on anything you consider important is useful!
1. What are the constraints you put on yourself?
2. What type of goal do you set for the retreat, if any?
3. Do you plan retreats for narrow or vague explorations?
4. What works for you? What doesn't?
Retrospective on the AI Safety Field Building Hub
Overview
--------
In late July I posted [Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding](https://forum.effectivealtruism.org/posts/ozm4SpiChfAAAGnw5/announcing-the-ai-safety-field-building-hub-a-new-effort-to). This new organization, funded by FTXFF, set out to work on AI safety outreach projects aimed at AI researchers, with funding to mentor new people and the flexibility to take on projects suggested by the community. As we’re now nearing the end of the FTXFF funding, the AISFB Hub is finishing up all existing projects and closing down. It’s been an exciting six months! Here’s a retrospective of all the AISFB Hub has been up to, as well as potential future work we’d get up to if we secure new funding.
What AISFB Hub has done
-----------------------
There are three major categories that the work has fallen into: working with my interview series, completing the pilot of an outreach survey, and everything else.
### Interview Series
In February-March 2022, I conducted 97 interviews with AI researchers (who had papers accepted to NeurIPS / ICML 2021). Interviewees were asked about their perceptions of artificial intelligence now and in the future, with special focus on risks from advanced AI systems. I presented the alignment problem and the idea of instrumental incentives, and asked them questions like whether they thought we’d ever achieve AGI, and if they’d be interested in working on AI alignment. I [released 11 transcripts of those interviews](https://www.lesswrong.com/posts/LfHWhcfK92qh2nwku/transcripts-of-interviews-with-ai-researchers) at the time, posted a [talk on the preliminary results](https://forum.effectivealtruism.org/posts/q49obZkQujkYmnFWY/vael-gates-risks-from-advanced-ai-june-2022), and promised future analysis. Now, for what the AISFB Hub did!
1. Created a website to display all of the below: <https://ai-risk-discussions.org/interviews>([EAF post](https://forum.effectivealtruism.org/posts/GpnLDSjzNkGB5xvry/ai-risk-discussions-website-exploring-interviews-from-97-ai))
2. Anonymized and released 72 more transcripts (i.e. everyone who gave permission), which brings us up to 83/97 [transcripts available](https://drive.google.com/drive/folders/1qNN6GpAl6a4KswxnJcdhN4fqnMQgZ9Vg).
3. Completed a [quantitative analysis](https://ai-risk-discussions.org/analyze_transcripts_static) of these [interviews](https://ai-risk-discussions.org/interviews), especially focused on how people responded to the core AI safety questions ([EAF post](https://forum.effectivealtruism.org/posts/EYRBprmo3c7AxMGw4/draft-quantitative-analysis-of-interviews-with-97-ai)), with a separate writeup on predicting researchers' interest in alignment ([EAF post](https://forum.effectivealtruism.org/posts/8pSq73kTJmPrzTfir/predicting-research-interest-in-ai-alignment)).
4. Created an interactive [walkthrough of common perspectives](https://ai-risk-discussions.org/perspectives/introduction) in the interviews, as well as counterarguments to some of the common objections to AI safety arguments ([EAF post](https://forum.effectivealtruism.org/posts/zSLjLvAHmEXJWaAZi/ai-safety-arguments-an-interactive-guide)).
### Outreach Survey
In this project, we sent AI researchers (paper accepted at NeurIPS / ICML / ICLR 2021) a 5-hour survey of AI safety readings to engage critically with and answer questions about. 28 researchers completed the pilot survey, and a partial writeup of those pilot results is available on EAF / LW as “[What AI Safety Materials Do ML Researchers Find Compelling?](https://www.lesswrong.com/posts/gpk8dARHBi7Mkmzt9/what-ai-safety-materials-do-ml-researchers-find-compelling)” We’re interested in continuing this project with significant modifications (drawn from lessons in the pilot study) if we receive further funding (as a scalable project, it most benefits from funding).
A few writeups were commissioned for use in this project, some of which were used, some not:
* [Summary writeup of AGI timeline/risk projections as of Oct 2022](https://docs.google.com/document/d/1j7tZ1Xf7-l2k2qr2t3MFwi-IkhXNdzA2N2WZBfcghsM/edit#) - by Kelsey Theriault
* List of AI alignment / safety organizations ([long version](https://docs.google.com/document/d/1SXhls4pCFdJ6PbRnlmNiF3GhTSx3qq2SkDRsKGKb1O4/edit), [short version](https://docs.google.com/document/d/1gimXyGj4nTU9TFJ6svlpmMtEWGbTrMoNYfzZMi8siAA/edit)) + [EAF post](https://forum.effectivealtruism.org/posts/xMzXbnpPeKWpTi3Gt/a-brief-overview-of-ai-safety-alignment-orgs-fields) - by Austin Witte
* [Counterarguments to AI safety and links to refutations](https://docs.google.com/document/d/1N52iABJLIk7XpPVY0A1I8dr9_bWQo6mCvi7pXC_JLBI/edit#) - by Kelsey Theriault (related: [Arguments against advanced AI safety](https://docs.google.com/document/d/1b76nYcghSzKoIdQ-Yp2Nt4v8gAZpMosI3vwEdk68z1A/edit?usp=sharing))
We also had some additional work that was set up for future surveys, which is described later.
### Miscellaneous
Write-ups not listed above:
* [Analysis of AI Safety surveys for field-building insights](https://forum.effectivealtruism.org/posts/SuvMZgc4M8FziSvur/analysis-of-ai-safety-surveys-for-field-building-insights) - by Ash Jafari
* Website [Resources](https://ai-risk-discussions.org/resources) and [What can I do?](https://ai-risk-discussions.org/what_can_i_do) pages were updated based on [Resources](https://forum.effectivealtruism.org/posts/eyJDBdpDd4BxormPH/resources-i-send-to-ai-researchers-about-ai-safety-1) post
There was a lot of logistics, featuring: setting up under a fiscal sponsor, hiring / working with lots of people on individual projects, and neverending reimbursements!
I also talked to various members of the community about AI safety fieldbuilding, and acquired a healthy amount of confusion about AI field-building strategy, and our familiar friend clawbacks.
People involved in AISFB Hub
----------------------------
So many people worked with the AISFB Hub, or helped out with various projects! Thanks so much to all of them.
* [ai-risk-discussions.org](http://ai-risk-discussions.org): Lukas Trötzmüller (interactive walkthrough, website), Maheen Shermohammed (quantitative analysis), Michael Keenan (website)
* Outreach survey: Collin Burns
* Data collection, data organizing, text cleaning, copyediting, and ops (alphabetical order): Rauno Arike, Angelica Belo, Tom Hutton, Ash Jafari, Aashish Khmiasia, Harvey LeNar, Kitt Morjanova, Jonathan Ng, Nicole Nohemi, Cleyton Pires, David Spearman, Kelsey Theriault, Stephen Thomas, Lukas Trötzmüller, Austin Witte
* Writing: check out the linked EA Forum posts above for names
* Interviews: Zi Cheng (Sam) Huang (tagging), Mary Collier Wilks (advising), Andrew Critch (idea suggestion), Tobi Gerstenberg (support)
* Broader community: Many people not listed here provided helpful advice, feedback, did a short trial with me, or otherwise contributed. I have Akash Wasil, Michael Chen, and Vaniver listed in my notes, but let me know if I lost track and you should be listed here!
* (If you’re wondering what my role in this org is: I do a lot of the direct work – writing / ops / data analysis etc. – and also manage / hire people to work on projects.)
**Funding**: FTX Future Fund, Stanford University, two anonymous donors, and LTFF
What AISFB Hub did not do
-------------------------
### Of the Specific Projects
There were some projects listed on the original AISFB Hub post that I decided not to pursue further.
* Helping with internal OpenAI / DeepMind field-building efforts → My guess after talking to people is that they mostly need internal people rather than external people, and while there’s a possibility of involvement, it’s pretty niche.
* AI safety-oriented film → I talked to a number of people attempting to make films, but didn’t think any were pursuing the fully-fledged feature-length film I hoped for. (However, one organization that I didn’t end up talking to is perhaps doing this!) It’s a very difficult vision, though. I’m stepping out of this now since I don’t see a clear path to contribution.
* Projects developed by [Center for AI Safety](https://safe.ai/) → I referred some people to CAIS, but didn’t end up taking on any of their projects myself.
### Of the Stated Aims
More broadly, I also failed to complete two of the major stated aims of the AI Safety Field-Building Hub.
* First: I basically failed to take on community-suggested field-building projects. I found that the community was unlikely to suggest things I wanted to do more than what I already wanted to do (perhaps unsurprising in retrospect). I was also quite busy with existing projects, and felt bottlenecked on people I felt happy delegating high-level projects to. I was able to help match 1-2 field-building suggestions with people who were ready to execute, but it was rare.
* I also failed to mentor people new to AI safety fieldbuilding. As it turns out, I find the [hiring / evaluating / training] circuit stressful, and prefer to work with a couple of people whose world-models and skills are quite close to mine. This meant I was mostly working in relatively peer relationships, or in evaluative rather than mentoring relationships.
Given the above, if I secure additional funding, I plan to substantially restructure this org. I’ll rebrand (probably to “Arkose”, rather than “AI Safety Field Building Hub”) to allow other groups to take the more ambitious, far-reaching name. I’ll have narrower organizational focus, and as such won’t take public suggestions for field-building projects. I’ll also not take on mentorship roles outside of my usual capacity as an EA (though people are welcome to contact me in that capacity, especially to talk about field building). I still aim to work on AI safety field-building projects aimed at ML researchers, with a smaller team of people on individual projects!
Of note, a lot of the anticipated changes above are due to personal fit. I find that field-building can be quite draining, especially when orienting on a sense of impact, and even before the FTXFF situation changed the funding environment and cut off my growth aims, I had been trying lots of pivots to make the experience feel sustainable. To my surprise and pleasure, at this point I’ve settled into a workflow I feel actively happy with. I got to try lots of different things during this AISFB Hub experiment! I really appreciate having had the opportunity to get feedback (from reality and others) about the quality of various ideas, and what kind of tasks, environments, and mindsets feel engaging to me.
### Impact Assessment
Finally, a last thing the AISFB Hub did not do: an accursed impact assessment. They’re so correct and good, and I think I’m basically too lazy to do one or figure out how. (As an aside, “making up scrappy processes with no external review” is such a refrain of this new org). Regardless, some notes are below.
My overall goal is to encourage ML researchers to be more interested in AI alignment, given that I think this is an important problem that needs more attention. I'm interested in changing the overall perception of the field to be more pro-safety, in addition to encouraging specific people to work on it if they're interested. The final output I’m looking for is something like “how many counterfactual people became interested enough to do AI alignment research”. One way to measure this is on the level of individuals – who became more involved, who were less involved, etc. The other measure is more nebulous, and I think needs to incorporate the fact that much of AISFB Hub outputs seem to be “non-peer-reviewed research output”-shaped. And I think it’s worth noting that a lot of my work feels pretty preliminary, like it’s vaguely promising small-scale stuff that feels like it could set up for large-scale outreach if it goes well (which is definitely still in question).
Here’s some data on individuals, and we’ll end there.
**Interview series**
On 7/29/22 (interviews took place in Feb-early March 2022, so about 5-6 months after), 86/97 participants were emailed. 82/86 participants responded to the email or the reminder email. They were asked:
* “Did the interview have a lasting effect on your beliefs (Y/N)?”
+ 42/82 (51%) responded Y.
* “Did the interview cause you to take any new actions in your work (Y/N)?”
+ 12/82 (15%) responded Y.
Note however that the interviews didn’t take place within AISFB Hub’s establishment.
**Outreach survey (pilot)**
* 2/28 (7%) seemed actively aggravated (maybe 3/30 (10%)) with AI alignment post-survey
* 2/28 (7%) seemed high degree of interest (maybe 2/30 (7%)) in AI alignment post-survey
* 8/28 (29%) seemed high degree (n=2) or pretty interested (n=6) in AI alignment post-survey
This ratio is not great, and we would want better numbers before progressing.
What’s next?
------------
I’m interested in continuing to work on AI safety field-building, aimed at ML researchers. AISFB Hub is closing down, but if I secure additional funding, then I’d want to start a new rebranded org (probably “Arkose”) with a more focused mode of operation. I’d likely focus on a “survey + interview(?) field-building org aimed at ML researchers”, where I’d hire a couple of people to help work on individual projects and still do a bunch of direct work myself. While these ideas are going to need to be more fleshed out before applying for funding, the directions I’m most excited about are:
### Surveys
I like surveys because they’re scalable, and because a lot of people haven’t heard of AI alignment (41% had heard of AI alignment in any capacity in my interviews) so it’s a nice way to introduce people to the ideas. (I also personally enjoy the tasks involved in surveys). We completed a pilot outreach survey, which went all right, but we have ideas for how to restructure it so that it goes better.
Once we have that better survey, I’m also interested in modifying it so that it can be run in China. I continue to be interested in AI safety in China, and have since talked to a lot of the involved parties there, who seem tentatively interested in me running such a survey.
We also did a fair amount of work to prepare for larger-scale deployment of surveys – if our pilot had gone better and funding had remained, we would have likely started scaling. A lot of the field-building insights from previous posts will be useful for focusing on specific populations, and we’ve done some work with respect to scalable survey logistics.
### Interviews
One-on-one conversations with AI researchers offer an unparalleled degree of personalization. During the AISFB Hub period, I was most interested in training other people to conduct these interviews, since I find conducting interviews to be pretty tiring as an introvert. There were a couple of potentially good candidates, but ultimately I don’t think anything was firmly set in motion.
However, people continue to be excited about interviews, and there are a couple of different ways I could see this progressing:
* More structured interviews compared to my previous [interview series](https://ai-risk-discussions.org/interviews) (also more technical-focused). This might make it more sustainable for me personally to conduct interviews. (And might increase trainability, but the technical requirements would be higher…)
* I continue to be interested in doing a pairing program between AI safety researchers and AI researchers for one-on-ones. I haven’t had time to invest in this, but have some preliminary interest and plans.
### Pipeline
I also remain firmly interested in “building the ML researcher pipeline” activities, despite not having concrete plans here. These wouldn’t be focuses of the org, but will be something I’m often thinking about when developing surveys and interviews.
* After ML researchers are introduced to AI alignment, where do interested researchers go next? Is there something they can do before the [AGISF AI alignment curriculum](https://www.agisafetyfundamentals.com/ai-alignment-curriculum)?
* We probably need new materials aimed specifically at AI researchers.
+ We haven’t tested all the existing introductory material, but the intended audience does seem to matter a lot for reception, and I’d definitely take more options.
+ One next-step need: If an ML researcher comes from [x subfield] and is interested in working on an alignment research project, what are the existing current papers they should read?
* Prizes to work on AI alignment projects is something I haven’t looked into, but still seems potentially quite good – how can those be integrated better into the pipeline? ([CAIS](https://safe.ai/)’s work)
* Having AI safety workshops at the major conferences also seems good – is that covered adequately? ([CAIS](https://safe.ai/) and others)
* Thoughts I’ve been hearing around: peer review for AI alignment papers, prediction markets / forecasting training
### Interested in funding me?
The above thus constitutes my rough plans for a “survey + interview(?) field-building org aimed at ML researchers”! I’m going to put together a more detailed plan in the next month and apply to Open Phil and probably SFF. (I’ve talked with Open Phil some already; they’re interested in seeing a full grant proposal from me.)
However, I’m also interested in talking to individual donors who want more of this stuff done, or have specific projects they’d be excited about funding. This is not a strong bid at all – I’ll get it sorted one way or another, and I’m personally financially stable. But if you happened to be interested in this work, I have a 501(c)(3) setup such that donations are tax deductible, and I’d like to chat about what you think field-building priorities are and how that may intersect with my plans.
**Timeline:** I anticipate taking a pause from this work for ~2-5 months pretty soon. This is mostly to explore some different field-building / ops experiences while I wait for funding evaluations. After that, I’ll be making a choice about whether to continue AISFB Hub-like activities, or work for a different org where I can do AI safety fieldbuilding. While my future plans are quite uncertain and subject to revision, my current best guess is that I’ll want to return to running this org, and ease and continuity of funding is likely to be a major crux.
Conclusion
----------
And that’s all, folks. It’s been a really cool ride for me – thanks so much for everyone who contributed to the AISFB Hub! Looking forward to whatever comes next, and very happy to discuss.
Speculations Concerning the First Free-ish Prediction Market
Suppose Kalshi launches their cool new prediction marketplace soon, and it attracts a large userbase with high trading volume, and it doesn’t get shut down or strangled with red tape. What then?
Speculations
* Replication markets will mitigate some of the bias afflicting scientific research.
* Market prices will become an additional source of bias afflicting scientific research.
* There will be sets of questions that can sketch out causal models. The market will give predictions not only on what will happen but why.
* Philanthropists will subsidize markets of public interest unless and until governments do.
* There will be great demand for trustworthy reports of ground truths. Some kind of payment scheme will be worked out that pays fees to trustworthy reporters. Costly signals of honesty will be developed and popularized.
* "A survey of 100K residents showed [some result]. Survey integrity was established by [some standardized proof of neutrality or costly signal of honesty]".
* There will be market estimates on how long we have until the next respiratory pandemic, the next natural disaster, the next large military conflict, and so on.
* The userbase will contain some amount of institutional investment even though individual bets are capped at $25K. There will be professional traders whose daily workflow is similar to that of existing professional stock traders.
* Futurism will be more disciplined. Conversations about technological forecasting will always involve some discussion of contract prices. "Majority self-driving traffic within the next 20 years, huh? Well the markets have that at X cents."
* The $25K cap will sometimes be low enough for price manipulation by special interests, so makers of important policy will be hesitant to use decision markets.
Questions
* How reasonable is the initial assumption--that Kalshi does not shut down or get hamstrung with red tape?
* How much extra complication will there be in betting about inflation? Betti
Putting Logarithmic-Quality Scales On Time
From my journal. Originally posted on my personal blog. Status: quite speculative, but there's something here.
***
Hmm.
We could probably put a -5 to +5 scale of behavior together that was logarithmic about the enduring good/bad impact of various activities.
Something totally neutral — say, neutral leisure that’s not particularly recharging nor distracting — that might be 0.
Activities or time that were +1 would be very slight gains without much enduring impact, like slightly-recharging leisure. +2 might be things with slightly more impact, like doing chores or cleaning.
The thing about logarithmic scales is that they get big, fast. +3 might be good one-off business-building activities, +4 might be establishing an ongoing revenue channel that keeps outputting gains, and +5 might be an absolute game-changing type thing like getting workable protocols on a vastly untapped marketing channel like Google Adwords in its very early days (with the potential for hundreds-of-thousands to millions of dollars in profit if done right) or recruiting an absolute superstar employee or some such.
On the negative side, -1 would be lost time without much repercussion (maybe a slow start to the day kind of staring off into space), -2 would be slight ongoing negative consequences like starting the day surfing the internet, -3 would be starting to take multi-day damage with bad decisions, -4 would be doing seriously dumb stuff with very long-term implications — for instance, stubbornly training through an injury and turning it into multi-months of physical therapy required, and -5 would be multi-year damage events like a serious alcoholic or heroin addict relapsing onto booze or heroin respectively.
Logarithmic scales are notoriously hard for people to get their mind around, but actually map well to reality in some instances.
For instance, if you spent a half-hour at “-1” quality time (-30 total) and then two hours at “-2” quality time (-1200 total), but then 90 minutes on a goo
Hazing as Counterfactual Mugging?
In the interest of making decision theory problems more relevant, I thought I'd propose a real-life version of counterfactual mugging. This is discussed in Drescher's Good and Real, and many places before. I will call it the Hazing Problem by comparison to this practice (possibly NSFW – this is hazing, folks, not Disneyland).
The problem involves a timewise sequence of agents who each decide whether to "haze" (abuse) the next agent. (They cannot impose any penalty on the previous agent.) For all agents n, here is their preference ranking:
1) not be hazed by n-1
2) be hazed by n-1, and haze n+1
3) be hazed by n-1, do NOT haze n+1
or, less formally:
1) not be hazed
2) haze and be hazed
3) be hazed, but stop the practice
The problem is: you have been hazed by n-1. Should you haze n+1?
Like in counterfactual mugging, the average agent has lower utility by conditioning on having been hazed, no matter how big the utility difference between 2) and 3) is. Also, it involves you having to make a choice from within a "losing" part of the "branching", which has implications for the other branches.
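A minimal sketch of that averaging claim (the utilities 0, -1 and -10 are illustrative stand-ins for the preference ranking above, not numbers from the problem itself):

```python
# Illustrative utilities consistent with the ranking: 0 = not hazed,
# -1 = hazed and hazes n+1, -10 = hazed but refuses (stops the practice).
def average_utility(policy, n_agents=1000):
    """policy(was_hazed) -> True if this agent hazes the next one."""
    total = 0
    hazed = True  # premise: the first agent in the chain has been hazed
    for _ in range(n_agents):
        hazes_next = policy(hazed)
        if not hazed:
            total += 0
        elif hazes_next:
            total += -1
        else:
            total += -10
        hazed = hazes_next  # agent n+1 is hazed iff agent n hazes
    return total / n_agents

print(average_utility(lambda was_hazed: was_hazed))  # "haze if hazed" (CDT-ish): -1.0
print(average_utility(lambda was_hazed: False))      # unconditional not-haze (UDT-ish): -0.01
```

The agent who has already been hazed locally prefers to haze (-1 beats -10), but the unconditional not-haze policy leaves the average agent far better off, which is the sense in which the UDT answer wins.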
You might object the choice of whether to haze is not random, as Omega’s coinflip is in CM; however, there are deterministic phrasings of CM, and your own epistemic limits blur the distinction.
UDT sees optimality in returning not-haze unconditionally. CDT reasons that its having been hazed is fixed, and so hazes. I *think* EDT would choose to haze because it would prefer to learn that, having been hazed, they hazed n+1, but I'm not sure about that.
I also think that TDT chooses not-haze, although this is questionable since I'm claiming this is isomorphic to CM. I would think TDT reasons that, "If n's regarded it as optimal to not haze despite having been hazed, then I would not be in a position of having been hazed, so I zero out the disutility of choosing not-haze."
Thoughts on the similarity and usefulness of the comparison?
Any Christians Here?
I’m currently atheist; my deconversion was quite the unremarkable event. September 2015 (I discovered HPMOR in February and RAZ then or in March), I was doing research on logical fallacies to better argue my points for a manga forum, when I came across Rational Wiki; for several of the logical fallacies, they tended to use creationists as examples. One thing lead to another (I was curious why Christianity was being so hated, and researched more on the site) I eventually found a list of how the bible outright contradicts Science and realized the two were mutually incompatible—fundamentalist Christianity at least. I faced my first true crisis of faith and was at a crossroads: “Science or Christianity”? I initially tried to be both a Christian and an atheist, having two personalities for my separate roles, but another Christian pointed out the hypocrisy of my practice, so I chose—and I chose Science. I have never looked back since, though I’ve been tempted to “return to my vomit” and even invented a religion to prevent myself from returning to Christianity and eventually just became a LW cultist. Someone said “I’m predisposed to fervour”; I wonder if that’s true. I don’t exactly have a perfect track record though…
In the times since I departed from the flock, I’ve argued quite vociferously against religion (Christianity in particular (my priors distribute probability over the sample space such that P(Christianity) is higher than the sum of the probabilities of all other religions. Basically either the Christian God or no God at all. I am not entirely sure how rational such an outlook is, especially as the only coherent solution I see to the [paradox of first cause](https://en.wikipedia.org/wiki/Cosmological_argument) is an acausal entity, and YHWH is not compatible with any Demiurge I would endorse.)) and was disappointed by the counter-arguments I would receive. I would often lament about how I wish I could have debated against myself before I deconverted (an argume
Solution to the two envelopes problem for moral weights
Summary
When taking expected values, the results can differ radically based on which common units we fix across possibilities. If we normalize relative to the value of human welfare, then other animals will tend to be prioritized more than by normalizing by the value of animal welfare or by using other approaches to moral uncertainty.
1. For welfare comparisons and prioritization between different moral patients like humans, other animals, aliens and artificial systems, I argue that we should fix and normalize relative to the moral value of human welfare, because our understanding of the value of welfare is based on our own experiences of welfare, which we directly value. Uncertainty about animal moral weights is about the nature of our experiences and to what extent other animals have capacities similar to those that ground our value, and so empirical uncertainty, not moral uncertainty (more).
2. I revise the account in light of the possibility of multiple different human reference points between which we don’t have fixed uncertainty-free comparisons of value, like pleasure vs belief-like preferences (cognitive desires) vs non-welfare moral reasons, or specific instances of these. If and because whatever moral reasons we apply to humans, (similar or other) moral reasons aren’t too unlikely to apply with a modest fraction of the same force to other animals, then the results would still be relatively animal-friendly (more).
1. I outline why this condition plausibly holds across moral reasons and theories, so that it’s plausible we should be fairly animal-friendly (more).
3. I describe and respond to some potential objections:
1. There could be inaccessible or unaccessed conscious subsystems in our brains that our direct experiences and intuitions do not (adequately) reflect, and these should be treated like additional moral patients (more).
2. The approach could lead to unresolvable disagreements between moral agents, but this doesn't seem any more ob
Meetup : Washington, D.C.: Games Discussion
Discussion article for the meetup : Washington, D.C.: Games Discussion
WHEN: 09 October 2016 03:30:00PM (-0400)
WHERE: Donald W. Reynolds Center for American Art and Portraiture
We will be meeting in the courtyard to talk about games and game-related topics.
Upcoming meetups:
* Oct. 16: Fun & Games
* Oct. 23: Communication
On writing like a butterfly
I thought it would be interesting to try to write my review of the Diving Bell and the Butterfly in my head without setting pen to paper until the end, and to convey at least some of it by blinking, since I find the fact that the author wrote the whole book in this way astonishing. Perhaps experiencing that process myself would improve my understanding of things, such that I wouldn’t be astonished.
I think trying to do this was an even better exercise than I expected, though by the end I was frustrated to the point of tears, and I’m still feeling kind of annoyed, having just put it up.
(Hopefully this was also a vivid and enlightening experience of signing up for annoying projects, which I do often, but usually the annoyance is months later than the agreeing, so I’m not sure that my intuitive anticipations make the connection.)
Before I go and do something anti-annoying, I figure I should write some notes on the experience, while it is fresh.
Some notes:
* It did feel fairly encumbering. There were nascent sentences that I might have tried to poke in somewhere, then play around with, then look at and move or get rid of, where the prospect of trying to do some equivalent of all that in my head while keeping hold of the broader paragraph was too intimidating, and I watched them go by. And the sentences I did write felt like half my attention was on something like balancing them on the end of a stick and not having them fall on the floor, and really sculpting them would have required too much dexterity.
* Though I think in some sense they were much more sculpted than usual, because I did think about each one for longer, and often hone it into something more succinct and memorable instead of writing down the first ramble that entered my mind. I’m not sure how that fits with the above observation.
* It felt mentally strength-building - as if I was exercising a capability that would improve, which was exciting, and I briefly fantasized about a stronger and defter
Device Placement Optimization with Reinforcement Learning
1 Introduction
---------------
Over the past few years, neural networks have proven to be a general and
effective tool for many practical problems, such as image
classification (Krizhevsky et al., [2012](#bib.bib26); Szegedy et al., [2015](#bib.bib35); He et al., [2016](#bib.bib14)),
speech
recognition (Hinton et al., [2012](#bib.bib16); Graves & Jaitly, [2014](#bib.bib11); Hannun et al., [2014](#bib.bib13); Chan et al., [2015](#bib.bib6)),
machine
translation (Sutskever et al., [2014](#bib.bib34); Cho et al., [2014](#bib.bib8); Bahdanau et al., [2015](#bib.bib3); Wu et al., [2016](#bib.bib41))
and speech synthesis (Oord et al., [2016](#bib.bib28); Arik et al., [2017](#bib.bib2); Wang et al., [2017](#bib.bib39)). Together
with their success is the growth in size and computational requirements of
training and inference. Currently, a typical approach to address these
requirements is to use a heterogeneous distributed environment with a mixture
of many CPUs and GPUs. In this environment, it is a common practice for a
machine learning practitioner to specify the device placement for certain
operations in the neural network. For example, in a neural translation
network, each layer, including all LSTM layers, the attention layer, and the
softmax layer, is computed by a GPU (Sutskever et al., [2014](#bib.bib34); Wu et al., [2016](#bib.bib41)).
Although such decisions can be made by machine learning practitioners, they can
be challenging, especially when the network has many branches (Szegedy et al., [2016](#bib.bib36)),
or when the minibatches get larger. Existing algorithmic
solvers (Pellegrini, [2009](#bib.bib31); Karypis & Kumar, [1995b](#bib.bib21)), on the other hand, are not flexible
enough to work with a dynamic environment with many interferences.
Figure 1: An overview of the RL based device placement model.
In this paper, we propose a method which learns to optimize device placement
for training and inference with neural networks. The method, illustrated in
Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Device Placement Optimization with Reinforcement Learning"), takes into account information of the environment by
performing a series of experiments to understand which parts of the model should
be placed on which device, and how to arrange the computations so that the
communication is optimized. Key to our method is the use of a
sequence-to-sequence model to read input information about the operations as
well as the dependencies between them, and then propose a placement for each
operation. Each proposal is executed in the hardware environment to measure
the execution time. The execution time is then used as a reward signal to
train the recurrent model so that it gives better proposals over time.
Our main result is that our method finds non-trivial placements on multiple
devices for Inception-V3 (Szegedy et al., [2016](#bib.bib36)), Recurrent Neural Language
Model (Zaremba et al., [2014](#bib.bib42); Jozefowicz et al., [2016](#bib.bib19)) and Neural Machine
Translation (Sutskever et al., [2014](#bib.bib34); Wu et al., [2016](#bib.bib41)). Single-step measurements show
that Scotch (Pellegrini, [2009](#bib.bib31)) yields disappointing results on all three
benchmarks, suggesting that such graph-based heuristics are not flexible
enough for these models. Our method can find non-trivial placements that are up to
3.5 times faster. When applied to train the three models in real time, the
placements found by our method are up to 20% faster than human experts’
placements.
2 Related Work
---------------
Our work is closely related to the idea of using neural networks and
reinforcement learning for combinatorial
optimization (Vinyals et al., [2015](#bib.bib38); Bello et al., [2016](#bib.bib5)). The space of possible
placements for a computational graph is discrete, and we model the placements
using a sequence-to-sequence approach, trained with policy gradients. However,
experiments in early work were only concerned with toy datasets, whereas this
work applies the framework to a large-scale practical application with noisy
rewards.
Reinforcement learning has also been applied to optimize system performance.
For example, Mao et al. ([2016](#bib.bib27)) propose to train a resource management
algorithm with policy gradients. However, they optimize the expected value of a
hand-crafted objective function based on the reward, unlike this work, where we
optimize directly for the running time of the configurations, hence relieving
the need to design intermediate cost models.
Graph partitioning is an intensively studied subject in computer science. Early
work such as
Kernighan & Lin ([1970](#bib.bib22)); Kirkpatrick et al. ([1983](#bib.bib25)); Fiduccia & Mattheyses ([1988](#bib.bib10)); Johnson et al. ([1989](#bib.bib18))
employ several iterative refinement procedures that start from a partition and
continue to explore similar partitions to improve. Alternative methods such as
Hagen & Kahng ([1992](#bib.bib12)); Karypis & Kumar ([1995b](#bib.bib21)) perform spectral analyses on matrix
representations of graphs to partition them. Despite their extensive
literature, graph partitioning algorithms remain heuristics for computational
graphs. The reason is that in order to apply these algorithms, one has to
construct cost models for the graphs of concern. Since such models are
expensive to even estimate and in virtually all cases, are not accurate, graph
partitioning algorithms applied on them can lead to unsatisfying results, as we
show in Section [4](#S4 "4 Experiments ‣ Device Placement Optimization with Reinforcement Learning") of this paper.
A well-known graph partitioning algorithm with an open source software library
is the Scotch optimizer (Pellegrini, [2009](#bib.bib31)), which we use as a baseline in our
experiments. The Scotch mapper attempts to balance the computational load of a
collection of tasks among a set of connected processing nodes, while reducing
the cost of communication by keeping intensively communicating tasks on nearby
nodes. Scotch relies on a collection of graph partitioning techniques such as
k-way Fiduccia-Mattheyses (Fiduccia & Mattheyses, [1988](#bib.bib10)), multilevel
method (Barnard & Simon, [1994](#bib.bib4); Hendrickson & Leland, [1993](#bib.bib15); Karypis & Kumar, [1995a](#bib.bib20)), band
method (Chevalier & Pellegrini, [2006](#bib.bib7)), diffusion method (Pellegrini, [2007](#bib.bib30)), and dual
recursive bipartitioning mapping (Pellegrini & Roman, [1996](#bib.bib32)).
Scotch models the problem with 2 graphs. The first graph is called the target
architecture graph, whose vertices represent hardware resources such as CPUs or
GPUs and whose edges represent the communication paths available between them,
such as a PCIe bus or a network link. The second graph is called the source
graph, which models the computation to be mapped onto the target architecture
graph. In the case of TensorFlow (Abadi et al., [2016](#bib.bib1)), the computations of
programs are modeled as a graph whose vertices represent operations, while the
graph edges represent the multidimensional data arrays (tensors) communicated
between them. Scotch users have to choose how and when given partitioning
should be applied to graphs. However, in our experiment, we rely on the
software’s default strategies implemented in Scotch, which have already been
extensively tuned.
3 Method
---------
Consider a TensorFlow computational graph G, which consists of M
operations {o1,o2,...,oM}, and a list of D available devices. A
placement P={p1,p2,...,pM} is an assignment of
an operation oi∈G to a device pi, where pi∈{1,...,D}. Let
r(P) denote the time that it takes to perform a complete execution of
G under the placement P. The goal of *device
placement optimization* is to find P such that the execution time
r(P) is minimized.
### 3.1 Training with Policy Gradients
Figure 2: Device placement model architecture.
While we seek to minimize the execution time r(P), direct
optimization of r(P) results in two major issues. First, in the
beginning of the training process, due to the bad placements sampled, the
measurements of r(P) can be noisy, leading to inappropriate
learning signals. Second, as the RL model gradually converges, the placements
that are sampled become more similar to each other, leading to small
differences between the corresponding running times, which results in less
distinguishable training signals. We empirically find that the square root of running time, R(P)=√r(P), makes the learning process more robust. Accordingly, we
propose to train a stochastic policy
π(P|G;θ) to minimize the objective
$$J(\theta) = \mathbb{E}_{P \sim \pi(P \mid G;\,\theta)}\big[R(P) \mid G\big] \qquad (1)$$
In our work, π(P|G;θ) is defined by an
attentional sequence-to-sequence model, which we will describe in
Section [3.2](#S3.SS2 "3.2 Architecture Details ‣ 3 Method ‣ Device Placement Optimization with Reinforcement Learning"). We learn the network parameters using
Adam (Kingma & Ba, [2014](#bib.bib24)) optimizer based on policy gradients computed via the
REINFORCE equation (Williams, [1992](#bib.bib40)),
$$\nabla_\theta J(\theta) = \mathbb{E}_{P \sim \pi(P \mid G;\,\theta)}\big[R(P)\cdot\nabla_\theta \log p(P \mid G;\,\theta)\big] \qquad (2)$$
We estimate ∇θJ(θ) by drawing K placement samples using
P_i ∼ π(⋅|G;θ). We reduce the variance
of policy gradients by using a baseline term B, leading to

$$\nabla_\theta J(\theta) \approx \frac{1}{K}\sum_{i=1}^{K}\big(R(P_i) - B\big)\,\nabla_\theta \log p(P_i \mid G;\,\theta) \qquad (3)$$
We find that a simple moving average baseline B works well in our
experiments. In practice, on computational graphs with large memory
footprints, some placements can fail to execute, e.g., putting all of the
operations of a huge LSTM on a single GPU will exceed the device’s memory
limit. For such cases, we set the square root of running time R(P)
to a large constant, which we call the failing signal. We specify the failing
signal manually depending on the input graph. We observe that throughout our
training process, some placements sporadically and unexpectedly fail, perhaps
due to factors such as the state of the machine (we train our model on a shared
cluster). This phenomenon is particularly undesirable towards the end of the
training process, since a large difference between R(Pi) and the
baseline B leads to a large update of the parameters, which potentially
perturbs parameters θ out of a good minimum. We thus hard-code the
training process so that after 5,000 steps, one performs a parameter update
with a sampled placement P only if the placement executes. In our
experiments, we also find that initializing the baseline B with the failing
signal results in more exploration.
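As a rough illustration (not the authors' implementation), the update loop described above might look like the following sketch, where `policy`, `optimizer`, and `measure_runtime` are assumed stand-ins for the sequence-to-sequence controller, its parameter update, and the hardware timing call:

```python
import math

def reinforce_step(policy, graph, optimizer, baseline, step,
                   k_samples=8, failing_signal=10.0, ema_decay=0.95):
    """One policy-gradient update with a moving-average baseline (sketch of Eq. (3)).

    k_samples, failing_signal and ema_decay are illustrative values; the paper
    sets the failing signal manually per input graph.
    """
    grads = []
    for _ in range(k_samples):
        placement = policy.sample_placement(graph)    # P_i ~ pi(. | G; theta)
        runtime = measure_runtime(graph, placement)   # r(P_i), or None if it fails to run
        if runtime is None:
            if step > 5000:                           # late in training, skip failed placements
                continue
            reward = failing_signal
        else:
            reward = math.sqrt(runtime)               # R(P) = sqrt(r(P))
        grads.append((reward - baseline) * policy.grad_log_prob(graph, placement))
        baseline = ema_decay * baseline + (1 - ema_decay) * reward
    if grads:
        # Gradient estimate of J(theta); the optimizer steps so as to minimize J.
        optimizer.apply(sum(grads) / len(grads))
    return baseline
```

The `step > 5000` guard mirrors the hard-coded rule above of only updating on placements that actually execute late in training.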
### 3.2 Architecture Details
We use a sequence-to-sequence model (Sutskever et al., [2014](#bib.bib34)) with
LSTM (Hochreiter & Schmidhuber, [1997](#bib.bib17)) and a content-based attention
mechanism (Bahdanau et al., [2015](#bib.bib3)) to predict the placements.
Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Training with Policy Gradients ‣ 3 Method ‣ Device Placement Optimization with Reinforcement Learning") shows the overall architecture of our model, which can
be divided into two parts: encoder RNN and decoder RNN.
The input to the encoder RNN is the sequence of operations of the input graph.
We embed the operations by concatenating their information. Specifically, for
each input graph G, we first collect the types of its
operations. An operation’s type describes the underlying computation, such as
MatMul or conv2d. For each type, we store a tunable embedding
vector. We then record the size of each operation’s list of output tensors and
concatenate them into a fixed-size zero-padded list called the output shape. We
also take the one-hot encoding vector that represents the operations that are
direct inputs and outputs to each operation. Finally, the embedding of each
operation is the concatenation of its type, its output shape, and its
one-hot encoded adjacency information.
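A rough sketch of how such an operation embedding could be assembled (the attribute names, helper structures, and dimensions are illustrative assumptions, not the paper's code):

```python
import numpy as np

def embed_operation(op, type_embeddings, num_ops, max_outputs=8):
    """Build one operation's embedding: type vector + padded output shape + adjacency.

    `op` is assumed to expose .type, .output_sizes, .input_ids and .output_ids;
    `type_embeddings` maps an operation type (e.g. 'MatMul', 'Conv2D') to a
    tunable vector.  max_outputs is an illustrative padding length.
    """
    type_vec = np.asarray(type_embeddings[op.type])

    # Fixed-size, zero-padded list of the sizes of the op's output tensors.
    shape_vec = np.zeros(max_outputs)
    sizes = list(op.output_sizes)[:max_outputs]
    shape_vec[:len(sizes)] = sizes

    # One-hot style encoding of the operations that are direct inputs/outputs.
    adj_vec = np.zeros(num_ops)
    for idx in list(op.input_ids) + list(op.output_ids):
        adj_vec[idx] = 1.0

    return np.concatenate([type_vec, shape_vec, adj_vec])
```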
The decoder is an attentional LSTM (Bahdanau et al., [2015](#bib.bib3)) with a fixed
number of time steps that is equal to the number of operations in a graph
G. At each step, the decoder outputs the device for the operation
at the same encoder time step. Each device has its own tunable embedding,
which is then fed as input to the next decoder time step.
### 3.3 Co-locating Operations
A key challenge when applying our method to TensorFlow computational graphs is
that these graphs generally have thousands of operations (see
Table [1](#S4.T1 "Table 1 ‣ Co-location groups. ‣ 4.1 Experiment Setup ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning")). Modeling such a large number of operations with
sequence-to-sequence models is difficult due to vanishing and exploding
gradient issues (Pascanu et al., [2013](#bib.bib29)) and large memory footprints. We propose to
reduce the number of objects to place on different devices by manually forcing
several operations to be located on the same device. In practice, this is
implemented by the colocate_with feature of TensorFlow.
We use several heuristics to create co-location groups. First, we rely on
TensorFlow’s default co-location groups, such as co-locating each operation’s
outputs with its gradients. We further apply a simple heuristic to merge more
operations into co-location groups. Specifically, if the output of an operation
X is consumed only by another operation Y, then operations X and
Y are co-located. Many initialization operations in TensorFlow can be grouped
in this way. In our experiments, we apply this heuristic recursively, and after
each iteration, we treat the co-location groups as operations, until no further
groups can be merged. For certain models, we apply
specific rules to construct co-location groups. For example, with ConvNets, we
can treat several convolutions and pooling layers as a co-location group, and
with RNN models, we treat each LSTM cell as a group.
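The single-consumer merging rule, applied repeatedly while treating merged groups as single nodes, might be sketched as follows (the graph representation is an assumed stand-in, not TensorFlow's API):

```python
def merge_colocation_groups(consumers):
    """consumers: dict mapping each operation to the set of operations that
    consume its output.  A group whose outputs are all consumed by a single
    other group is merged into that group; repeat until nothing changes."""
    group_of = {op: i for i, op in enumerate(consumers)}   # start with one group per op
    changed = True
    while changed:
        changed = False
        # For every group, collect the set of *other* groups consuming its outputs.
        group_outputs = {}
        for op, outs in consumers.items():
            g = group_of[op]
            group_outputs.setdefault(g, set()).update(
                group_of[o] for o in outs if group_of[o] != g)
        for g, outs in group_outputs.items():
            if len(outs) == 1:                             # exactly one external consumer
                target = outs.pop()
                for op in group_of:
                    if group_of[op] == g:
                        group_of[op] = target
                changed = True
                break                                      # regroup and repeat
    groups = {}
    for op, g in group_of.items():
        groups.setdefault(g, []).append(op)
    return list(groups.values())
```

Each returned group is then pinned to a single device, e.g. via colocate_with.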
### 3.4 Distributed Training
We speed up the training process of our model using asynchronous distributed
training, as shown in Figure [3](#S3.F3 "Figure 3 ‣ 3.4 Distributed Training ‣ 3 Method ‣ Device Placement Optimization with Reinforcement Learning"). Our framework consists of
several controllers, each of which executes the current policy defined
by the attentional sequence-to-sequence model as described in
Section [3.2](#S3.SS2 "3.2 Architecture Details ‣ 3 Method ‣ Device Placement Optimization with Reinforcement Learning"). All of the controllers interact with a single shared
parameter server. We note that the parameter server holds only the controllers’
parameters, and not the input graph’s parameters, because keeping the input
graph’s parameters on the parameter server can potentially create a latency
bottleneck to transfer these parameters. Each controller in our framework
interacts with K workers, where K is the number of Monte Carlo samples in
Equation [3](#S3.E3 "(3) ‣ 3.1 Training with Policy Gradients ‣ 3 Method ‣ Device Placement Optimization with Reinforcement Learning").
Figure 3: Distributed and asynchronous parameter update
and reward evaluation.
The training process has two alternating phases. In the first phase, each
worker receives a signal that indicates that it should wait for placements from
its controller, while each controller receives a signal that indicates it
should sample K placements. Each sampled placement comes with a probability.
Each controller then independently sends the placements to its workers, one
placement per worker, and sends a signal to indicate a phase change.
In the second phase, each worker executes the placement it receives and
measures the running time. To reduce the variance in these measurements, each
placement is executed for 10 steps, and the average running time of all steps
but the first is recorded. We observe that in TensorFlow, the first step
can take longer to execute than the following steps, and hence we treat
its running time as an outlier. Each controller waits for all of its workers to
finish executing their assigned placements and return their running times.
When all of the running times are received, the controller uses the running
times to scale the corresponding gradients to asynchronously update the
controller parameters that reside in the parameter server.
In our experiments, we use up to 20 controllers, each with either 4 or 8
workers. Under this setting, it takes between 12 and 27 hours to find the
best placement for the models in our experiments. Using more workers per
controller yields more accurate estimates of the policy gradient as in
Equation [3](#S3.E3 "(3) ‣ 3.1 Training with Policy Gradients ‣ 3 Method ‣ Device Placement Optimization with Reinforcement Learning"), but comes at the expense of possibly
having to put more workers in idle states. We also note that due to the
discrepancies between machines, it is more stable to let each controller have
its own baseline.
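A highly simplified sketch of this two-phase protocol (queues stand in for the signaling described above; `policy`, `param_server`, and `measure` are assumed stand-ins for the controller network, the shared parameter server, and the runtime measurement):

```python
from queue import Queue

class Worker:
    """The two queues a controller uses to talk to one worker (a sketch)."""
    def __init__(self):
        self.inbox, self.outbox = Queue(), Queue()

def controller_loop(policy, graph, param_server, workers, num_rounds):
    """One controller: sample one placement per worker, wait for the measured
    running times, then push an asynchronous gradient to the parameter server."""
    for _ in range(num_rounds):
        policy.load_weights(param_server.get())       # pull current controller params
        # Phase 1: sample and send one placement per worker.
        placements = [policy.sample_placement(graph) for _ in workers]
        for worker, placement in zip(workers, placements):
            worker.inbox.put(placement)
        # Phase 2: collect all rewards, then update asynchronously.
        rewards = [worker.outbox.get() for worker in workers]
        grad = policy.reinforce_gradient(graph, placements, rewards)  # Eq. (3), per-controller baseline
        param_server.apply_gradient(grad)             # no locking across controllers

def worker_loop(worker, graph):
    """One worker: execute each placement it receives and report the reward."""
    while True:
        placement = worker.inbox.get()
        worker.outbox.put(measure(graph, placement))  # assumed timing call (10 steps, drop the 1st)
```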
4 Experiments
--------------
In the following experiments, we apply our proposed method to assign
computations to devices on three important neural networks in the deep learning
literature: Recurrent Neural Language Model (RNNLM) (Zaremba et al., [2014](#bib.bib42); Jozefowicz et al., [2016](#bib.bib19)),
Attentional Neural Machine Translation (Bahdanau et al., [2015](#bib.bib3)), and
Inception-V3 (Szegedy et al., [2016](#bib.bib36)). We compare the RL placements against strong
existing baselines described in Section [4.2](#S4.SS2 "4.2 Baselines ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning").
### 4.1 Experiment Setup
#### Benchmarks.
We evaluate our approach on three established deep
learning models:
* Recurrent Neural Network Language Model (RNNLM) with multiple LSTM
layers (Zaremba et al., [2014](#bib.bib42); Jozefowicz et al., [2016](#bib.bib19)). The grid structure of this model introduces
tremendous potential for parallel executions because each LSTM cell can
start as soon as its input and previous states are available.
* Neural Machine Translation with attention mechanism
(NMT) (Bahdanau et al., [2015](#bib.bib3); Wu et al., [2016](#bib.bib41)). While the architecture of this
model is similar to that of RNNLM, its large number of hidden states due to
the source and target sentences necessitates model parallelism.
Both Sutskever et al. ([2014](#bib.bib34)) and Wu et al. ([2016](#bib.bib41)) propose to place each
LSTM layer, the attention layer, and the softmax layer on a separate
device. While the authors observe significant improvements at training
time, their choices are not optimal. In fact, we show in
our experiments that a trained policy can find significantly better
placements.
* Inception-V3 (Szegedy et al., [2016](#bib.bib36)) is a widely-used architecture for
image recognition and visual feature extraction (Khetan & Oh, [2016](#bib.bib23); Esteva et al., [2016](#bib.bib9)).
The Inception network has
multiple blocks. Each block has several branches of convolutional and
pooling layers, which are then concatenated to make the inputs for the next
block. While these branches can be executed in parallel, the network’s
depth restricts such potential since the later blocks have to wait for the
previous ones.
#### Model details.
For Inception-V3, each step is executed on a batch
of images, each of size 299×299×3, which is the widely-used
setting for the ImageNet Challenge (Szegedy et al., [2015](#bib.bib35)). For RNNLM and NMT,
we use the model with 2 LSTM layers, with sizes of 2048 and 1024,
respectively. We set the number of unrolling steps for RNNLM, as well as the
maximum length for the source and target sentences of NMT, to 40. Each pass
on RNNLM and NMT consists of a minibatch of 64 sequences.
#### Co-location groups.
We pre-process the TensorFlow computational
graphs of the three aforementioned models to manually create their co-location
groups. More precisely: for RNNLM and NMT, we treat each LSTM cell, each
embedding lookup, each attention step and each softmax prediction step as a
group; for Inception-V3, we treat each branch as a group.
Table [1](#S4.T1 "Table 1 ‣ Co-location groups. ‣ 4.1 Experiment Setup ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning") shows the grouping statistics of these models.
| Model | #operations | #groups |
| --- | --- | --- |
| RNNLM | 8943 | 188 |
| NMT | 22097 | 280 |
| Inception-V3 | 31180 | 83 |
Table 1: Model statistics.
| Tasks | Single-CPU | Single-GPU | #GPUs | Scotch | MinCut | Expert | RL-based | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RNNLM (batch 64) | 6.89 | 1.57 | 2 | 13.43 | 11.94 | 3.81 | 1.57 | 0.0% |
| | | | 4 | 11.52 | 10.44 | 4.46 | 1.57 | 0.0% |
| NMT (batch 64) | 10.72 | OOM | 2 | 14.19 | 11.54 | 4.99 | 4.04 | 23.5% |
| | | | 4 | 11.23 | 11.78 | 4.73 | 3.92 | 20.6% |
| Inception-V3 (batch 32) | 26.21 | 4.60 | 2 | 25.24 | 22.88 | 11.22 | 4.60 | 0.0% |
| | | | 4 | 23.41 | 24.52 | 10.65 | 3.85 | 19.0% |
Table 2: Running times (in seconds) of placements found by
RL-based method and the baselines (lower is better). For each model, the
first row shows the results with 1 CPU and 2 GPUs; the second row shows the
results with 1 CPU and 4 GPUs. Last column shows improvements in running time
achieved by RL-based placement over fastest baseline. To reduce variance,
running times less than 10 seconds are measured 15 times and the averages
are recorded. OOM is Out Of Memory.
#### Metrics.
We implement training operations for RNNLM and NMT using
Adam (Kingma & Ba, [2014](#bib.bib24)), and for Inception-V3 using RMSProp (Tieleman & Hinton, [2012](#bib.bib37)). We
evaluate a placement by the total time it takes to perform one forward pass,
one backward pass and one parameter update. To reduce measurement variance, we
average the running times over several trials. Additionally, we train each
model from scratch using the placements found by our method and compare the
training time to that of the strongest baseline placement.
#### Devices.
In our experiments, the available devices are 1 Intel
Haswell 2300 CPU, which has 18 cores, and either 2 or 4 Nvidia Tesla K80 GPUs.
We allow 50 GB of RAM for all models and settings.
### 4.2 Baselines
Figure 4: RL-based placement of Neural MT graph.
Top: encoder, Bottom: decoder. Devices are denoted by colors, where the
transparent color represents an operation on a CPU and each other unique
color represents a different GPU. This placement achieves an improvement of
19.3% in running time compared to the fine-tuned expert-designed placement.
#### Single-CPU.
This placement executes the whole neural network on a
single CPU. Processing some large models on GPUs is infeasible due to memory
limits, leaving Single-CPU the only choice despite being slow.
#### Single-GPU.
This placement executes the whole neural network on a
single GPU. If an operation lacks a GPU implementation, it will be placed on the CPU.
#### Scotch.
We estimate the computational costs of each operation as
well as the amount of data that flows along each edge of the neural network
model, and feed them to the Scotch static mapper (Pellegrini, [2009](#bib.bib31)). We also
annotate the architecture graph (see Section [2](#S2 "2 Related Work ‣ Device Placement Optimization with Reinforcement Learning")) with compute
and communication capacities of the underlying devices.
#### MinCut.
We use the same Scotch optimizer, but eliminate the CPU
from the list of available devices fed to the optimizer. Similar to the
single-GPU placement, if an operation has no GPU implementation, it runs on the
CPU.
#### Expert-designed.
For RNNLM and NMT, we put each LSTM layer on a
device. For NMT, we also put the attention mechanism and the softmax layer on
the same device with the highest LSTM layer, and we put the embedding layer on
the same device with the first LSTM layer. For Inception-V3, the common
practice for the batch size of 32 is to put the entire model on a single GPU.
There is no implementation of Inception-V3 with batch 32 using more than 1 GPU.
To create an intuitive baseline on multiple GPUs, we heuristically partition
the model into contiguous parts that have roughly the same number of layers. We
compare against this approach in Section [4.3](#S4.SS3 "4.3 Single-Step Runtime Efficiency ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning"). The common practice
for Inception-V3 with the larger batch size of 128 is to apply data
parallelism using 4 GPUs. Each GPU runs a replica of the model and processes
a batch of size 32 (Szegedy et al., [2016](#bib.bib36)). We compare against this approach in
Section [4.4](#S4.SS4 "4.4 End-to-End Runtime Efficiency ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning").
### 4.3 Single-Step Runtime Efficiency
In Table [2](#S4.T2 "Table 2 ‣ Co-location groups. ‣ 4.1 Experiment Setup ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning"), we present the per-step running times of the
placements found by our method and by the baselines. We observe that our model
is either on par with or better than other methods of placements. Despite
being given no information other than the running times of the placements and
the number of available devices, our model learns subtle tradeoffs between
performance gain by parallelism and the costs induced by inter-device
communications.
#### RNNLM.
Our method detects that it is possible to fit the whole
RNNLM graph into one GPU, and decides to do so to save the inter-device
communication latencies. The resulting placement is more than twice as fast as
the best published human-designed baseline.
#### Neural MT.
Our method finds a non-trivial placement (see
Figure [4](#S4.F4 "Figure 4 ‣ 4.2 Baselines ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning")) that leads to a speedup of up to 20.6% for
4 GPUs. Our method also learns to put the less computationally expensive
operations, such as embedding lookups, on the CPU. We suspect that, whilst
being the slowest device, the CPU can handle these lookup operations and
thereby reduce the load on the GPUs.
#### Inception-V3.
For Inception-V3 with the batch size of 32,
RL-based placer learns that when there are only 2 GPUs available, the degree
of freedom for model parallelism is limited. It thus places all the operations
on a single GPU (although it could use 2 GPUs). However, when 4 GPUs are
available, the RL-based placer finds an efficient way to use all of the GPUs,
reducing the model’s per-step running time from 4.60 seconds to 3.85
seconds. This result is significant, as neither of our baselines could find a
placement better than assigning all the operations to a single GPU.
We also conduct a simple extension of our experiments, by increasing the batch
sizes of RNNLM and NMT to 256, and their LSTM sizes to 4,096 and 2,048,
respectively. This makes the models’ memory footprints so large that even one
layer of them cannot be fitted into any single device, hence ruling out the
human-designed placement. Nevertheless, after several steps of finding
placements that fail to run, our approach manages to find a way to successfully
place the input models on the devices. The running times of the placements found for
large RNNLM and NMT are 33.46 and 35.84 seconds, respectively.
### 4.4 End-to-End Runtime Efficiency
We now investigate whether the RL-based placements can speedup not only the
single-step running time but also the entire training process.
Figure 5: RL-based placement of Inception-V3.
Devices are denoted by colors, where the transparent color represents an
operation on a CPU and each other unique color represents a different GPU.
RL-based placement achieves an improvement of 19.7% in running time
compared to the expert-designed placement.
#### Neural MT.
We train our Neural MT model on the WMT14 English-German
dataset (<http://www.statmt.org/wmt14/>). For these experiments, we
pre-process the dataset into word pieces (Wu et al., [2016](#bib.bib41)) such that the vocabularies
of both languages consist of 32,000 word pieces. In order to match our
model’s settings, we consider only the translation pairs where no sentence has
more than 40 word pieces. We train each model for 200,000 steps and record
their train perplexities. Each training machine has 4 Nvidia Tesla K80 GPUs
and 1 Intel Haswell 2300 CPU. Since there are inevitable noises in the
computer systems when measuring the running times, we train each model 4
times independently and average their per-step running times and perplexities.
Figure 6: Training curves of NMT model using RL-based
placement and expert-designed placement. The per-step running time as well as
the perplexities are averaged over 4 runs.
The RL-based placement runs faster than the expert-designed placement, as shown
in the training curves in Figure [6](#S4.F6 "Figure 6 ‣ Neural MT. ‣ 4.4 End-to-End Runtime Efficiency ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning"). Quantitatively, the
expert-designed placement, which puts each layer (LSTM, attention and softmax)
on a different GPU, takes 229.57 hours; meanwhile the RL-based placement (see
Figure [4](#S4.F4 "Figure 4 ‣ 4.2 Baselines ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning")) takes 165.73 hours, giving a 27.8% speedup
in total training time. We note that the measured speedup rate (and the
running times) of these models appear different from those reported in
Table [2](#S4.T2 "Table 2 ‣ Co-location groups. ‣ 4.1 Experiment Setup ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning") because measurements made within our RL method incur several
overheads.
#### Inception-V3.
We train Inception-V3 on the ImageNet
dataset (Russakovsky et al., [2015](#bib.bib33)) until the model reaches the accuracy of 72% on the
validation set. In practice, more often, inception models are trained with data
parallelism rather than model parallelism. We thus compare the placements found
by our algorithm (see Figure [5](#S4.F5 "Figure 5 ‣ 4.4 End-to-End Runtime Efficiency ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning")) against two such
baselines.
The first baseline, called Asynchronous towers, puts one replica of the
Inception-V3 network on each GPU. These replicas share the data reading
operations, which are assigned to the CPU. Each replica independently performs
forward and backward passes to compute the model’s gradients with respect to a
minibatch of 32 images and then updates the parameters asynchronously. The
second baseline, called Synchronous towers, is the same as Asynchronous towers,
except that it waits for the gradients of all copies before making an update.
All settings use the learning rate of 0.045 and are trained using RMSProp.
Figure 7: Training curves of Inception-V3 model
using RL-based placement and two expert-designed placements: Synchronous
towers and Asynchronous towers. The per-step running time as well as the
perplexities are averaged over 4 runs.
Figure [7](#S4.F7 "Figure 7 ‣ Inception-V3. ‣ 4.4 End-to-End Runtime Efficiency ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning") shows the training curves of the three
settings for Inception-V3. As can be seen from the figure, the end-to-end
training result confirms that the RL-based placement indeed speeds up the
training process by 19.7% compared to Synchronous towers. While
Asynchronous towers gives a better per-step time, synchronous approaches lead to
faster convergence. The training curve of the RL-based placement, being slower
at first, eventually crosses the training curve of Asynchronous towers.
### 4.5 Analysis of Found Placements
In order to understand the rationale behind the RL-based placements, we analyze
their profiling information and compare them against those of expert-designed
placements.
Figure 8: Computational load profiling of NMT model
for RL-based and expert-designed placements. Smaller blocks of each color
correspond to feedforward path and same-color upper blocks correspond to
backpropagation. RL-based placement performs a more balanced computational
load assignment than the expert-designed placement.
#### Neural MT.
We first compare the per-device computational loads by
RL-based placement and expert-designed placement for the NMT model.
Figure [8](#S4.F8 "Figure 8 ‣ 4.5 Analysis of Found Placements ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning") shows such performance profiling. RL-based
placement balances the workload significantly better than does the
expert-designed placement. Interestingly, if we do not take into account the
time for back-propagation, then expert-designed placement makes sense because
the workload is more balanced (whilst still less balanced than ours). The
imbalance is much more significant when back-propagation time is considered.
Figure 9: Computational load and memory
copy profiling of Inception-V3 for RL-based and Synchronous tower
placements. Top figure: Operation runtime for GPUs. Smaller blocks of each
color correspond to feedforward path and same-color upper blocks correspond
to backpropagation. RL-based placement produces less balanced computational
load than Synchronous tower. Bottom figure: Memory copy time. All memory
copy activities in Synchronous tower are between a GPU and a CPU, which
are in general slower than GPU to GPU copies that take place in the
RL-based placement.
#### Inception-V3.
On Inception-V3, however, the RL-based placement does
not seek to balance the computations between GPUs, as illustrated in
Figure [9](#S4.F9 "Figure 9 ‣ Neural MT. ‣ 4.5 Analysis of Found Placements ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning")-top. We suspect this is because
Inception-V3 has more dependencies than NMT, allowing less room for model
parallelism across GPUs. The reduction in running time of the RL-based
placement comes from the less time it spends copying data between devices, as
shown in Figure [9](#S4.F9 "Figure 9 ‣ Neural MT. ‣ 4.5 Analysis of Found Placements ‣ 4 Experiments ‣ Device Placement Optimization with Reinforcement Learning")-bottom. In particular, the
model’s parameters are placed on the same device as the operations that use them,
unlike in Synchronous towers, where every tower has to wait for all of the
parameters to be updated and sent to it. Co-locating parameters with the
operations that use them reduces the communication cost, leading to an overall
reduction in computing time.
5 Conclusion
-------------
In this paper, we present an adaptive method to optimize device placements for
neural networks. Key to our approach is the use of a sequence-to-sequence model
to propose device placements given the operations in a neural network. The
model is trained to optimize the execution time of the neural network. Besides
the execution time, the number of available devices is the only other
information about the hardware configuration that we feed to our model.
Our results demonstrate that the proposed approach learns the properties of the
environment including the complex tradeoff between computation and
communication in hardware. On a range of tasks including image classification,
language modeling, and machine translation, our method surpasses placements
carefully designed by human experts and highly optimized algorithmic solvers.
Acknowledgements
----------------
We thank Martin Abadi, Stephan Gouws, and the Google Brain team for their help
with the project.
Research speedruns
The 'research speedrun' is a format that I've been playing with on my blog for the last year or so. It's been more popular than I expected and it looks like there's a lot more that could be done with the idea. So I thought I'd write it up here and see if anyone else wants to experiment with it themselves, or suggest different things to try.
The format
It's a very simple format, so this section will be short:
* Pick a topic
* Set a one hour timer
* Find out as much as possible about the topic before the buzzer goes off while writing up a live commentary
* Do a very quick editing pass to fix the worst typos and then hit Publish
So far I've done speedruns on Marx on alienation, the Vygotsky Circle, sensemaking, the Prussian education system, abacus schools, Germaine de Staël, and mess.
What I've used it for so far
Obviously, there's only so much you can learn in an hour - calling this 'research' is a little bit of a stretch. Sometimes I don't even manage to leave Wikipedia! Even so, this technique works well for topics where the counterfactual is 'I don't read anything at all' or 'I google around aimlessly for half an hour and then forget it all'. Writing notes as I go means that I'm making enough active effort that I end up remembering some of it, but I know the process is timeboxed so it's not going to end up being one of those annoying ever-expanding writing projects.
Here are a few rough categories of topics I've tried so far:
* 'Sidequests'. Speedruns are great for topics that you find interesting but are never going to devote serious time to. I have a very minor side interest in the history of schools and universities, so if I come across something intriguing, like Renaissance abacus schools, it's a good way to learn a few basic things quickly. I have one or two more ideas for speedruns in this area.
* Historical background. An hour is quite a good length of time to pick up a few fragments of background historical context for something you're inter
Meetup : Hiking in Vancouver, Canada
Discussion article for the meetup : Hiking in Vancouver, Canada
WHEN: 09 August 2011 06:00:00PM (-0700)
WHERE: Grouse Mountain, British Columbia, Canada
We're hiking at Grouse Mountain on Tuesday evening from 6:30pm. The best way to get there seems to be taking the seabus to North Vancouver and then catching the 236 from there.
There is a free (if you pay for the mountain pass or have an annual pass) transit (shuttle bus) that goes from downtown to Grouse. Pickup/drop-off points (for the shuttle bus) are Canada Place, the Hyatt Regency and the Blue Horizon Hotel.
I'm going to be driving there from Joyce St in Collingwood from about 5:30pm, so get in touch (michael dot keenan AT gmail dot com or +1 650 283 9013) if you'd like to be picked up on the way.
To hear more last minute details, join the mailing list.
Histocracy: Open, Effective Group Decision-Making With Weighted Voting
The following is slightly edited from a pitch I wrote for a general audience. I've added blog-specific content afterwards.
----------------------------------------
Information technology allows for unprecedented levels of collaboration and debate. When an issue arises people communicate freely, are held accountable for misrepresentations, and are celebrated for cogent analysis. We share information and opinion better than ever before. And then we leave the actual decision up to one person, or a tiny committee, or a poll of a population that for the most part wasn't paying attention, or at best an unpredictably irrational market. The one thing we still don't aggregate in a sophisticated way is human judgment.
Organizations evolve complex decision-making structures because variance in human judgement is complicated. We try to put the most competent person in charge—but there is wisdom in crowds, and so a wise leader gets buy-in from a broad pool of competent subordinates. We must constantly try to evaluate who has the best record, to see who's been right in the past...and we get it wrong all the time. We overestimate our own competence. In hindsight, we misremember the right decision as being obvious. We trust the man with the better hair. Any organization with group buy-in on decisions amasses a solid amount of data on the competence of its members, but it does not curate or use this data effectively.
We can do better, using current technology, some simple software, and some relatively simple math. The solution is called histocracy. It is most easily explained with a use case.
The H Foundation is a hypothetical philanthropic organization, with a board of twelve people overseeing a large fund. Each year, they receive and review several hundred grant applications, and choose a few applicants to give money to. Sometimes these applicants use the money effectively, and sometimes they fail. Often an applicant they turn down will get funding elsewhere and experience
SAE sparse feature graph using only residual layers
Does it make sense to extract a sparse feature graph for a behavior from only the residual layers of GPT-2 small, or do we need all of the MLP and attention layers as well?
Quotes from the Stargate press conference
Present alongside President Trump:
* Sam Altman (who President Trump introduces as "by far the leading expert" on AI)
* Larry Ellison (Oracle executive chairman and CTO)
* Masayoshi Son (Softbank CEO who believes he was born to realize ASI)
> President Trump: What we want to do is we want to keep [AI datacenters] in this country. China is a competitor and others are competitors.
> President Trump: I'm going to help a lot through emergency declarations because we have an emergency. We have to get this stuff built. So they have to produce a lot of electricity and we'll make it possible for them to get that production done very easily at their own plants if they want, where they'll build at the plant, the AI plant they'll build energy generation and that will be incredible.
> President Trump: Beginning immediately, Stargate will be building the physical and virtual infrastructure to power the next generation of advancements in AI, and this will include the construction of colossal data centers
> Larry Ellison: The first of them are under construction in Texas. Each building is a half a million square feet. There are 10 buildings currently, currently being built, but that will expand to 20 and other locations beyond the Abilene location, which is, which is our first location.
> Masayoshi Son: Mr. President, uh, last, last month I came to celebrate your winning and promised that we will invest $100 billion and you told me, oh Masa, go for $200. Now I came back with 500.
> Masayoshi Son: I think AGI is coming very, very soon, and after that, that's not the goal. After that, artificial superintelligence will come to solve the issues that mankind would never ever have thought that we could solve. Well, this is the beginning of our golden age. Thank you very much.
> Sam Altman: I don't have too much to add, but I do want to say I'm thrilled we get to do this in the United States of America. I think this will be the most important project of this
AI Forecasting Dictionary (Forecasting infrastructure, part 1)
*This post introduces the [AI Forecasting Dictionary](https://parallel-forecast.github.io/AI-dict/), an open-source set of standards and conventions for precisely interpreting AI and auxiliary terms. It is the first part in a series of blog posts which motivate and introduce pieces of infrastructure intended to improve our ability to forecast novel and uncertain domains like AI.*
*The Dictionary is currently in beta, and we're launching early to get feedback from the community and quickly figure out how useful it is.*
### Background and motivation
**1) Operationalisation is an unsolved problem in forecasting**
A key challenge in (AI) forecasting is to write good questions. This is tricky because we want questions which *both* capture important uncertainties, *and* are sufficiently concrete that we can resolve them and award points to forecasters in hindsight. For example:
> Will there be a slow take-off?
is a question that’s important yet too vague.
> Will there be a 4-year doubling of world output before the first 1-year doubling of world output?
is both important and concrete, yet sufficiently far-out that it’s unclear if standard forecasting practices will be helpful in resolving it.
> Will there be a Starcraft II agent by the end of 2020 which is at least as powerful as AlphaStar, yet uses <$10,000 of publicly available compute?
is more amenable to standard forecasting practices, but at the cost of being only tangentially related to the high-level uncertainty we initially cared about.
And so on.
Currently, forecasting projects reinvent this wheel of operationalisation all the time. There are usually idiosyncratic and time-consuming processes of writing and testing questions (this might take many hours for a single question) [1], and best practices tend to evolve organically but without being systematically recorded and built upon [2].
**2) The future is big, and forecasting it might require answering *a lot* of questions**
This is an empirical claim in which we've become more confident by working in this space over the last year.
One way of seeing this is by attempting to break down an important high-level question into pieces. Suppose we want to get a handle on AI progress by investigating key inputs. We might branch those into progress on hardware, software, and data (including simulations). We might then branch hardware into economics and algorithmic parallelizability. To understand the economics, we must branch it into the supply and demand side, and we must then branch each of those to understand how they interface with regulation and innovation. This involves thousands of actors across academia, industry and government, and hundreds of different metrics for tracking progress of various kinds. And we've only done a brief depth-first search on one of the branches of the hardware-software-data tree, which in turn is just one way of approaching the AI forecasting problem.
Another way of guesstimating this: [the AI Impacts archives](https://aiimpacts.org/archives/) contain roughly 140 articles. Suppose this is 10% of the number of articles they'd need to accomplish their mission. If each article contains 1-30 uncertain claims that we'd ideally like to gather estimates on, that's 1,400 to 42,000 uncertainties -- each of which would admit many different ways of being sufficiently operationalised. For reference, over the 4 years of the Good Judgement Project, roughly 500 questions were answered.
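Spelling out the AI Impacts guesstimate above:

$$
\frac{140 \text{ articles}}{10\%} = 1{,}400 \text{ articles}, \qquad 1{,}400 \times 1 = 1{,}400 \quad\text{and}\quad 1{,}400 \times 30 = 42{,}000 \text{ uncertainties}.
$$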
We’d of course be able to prune this space by focusing on the most important questions. Nonetheless, there seems to be a plausible case that scaling our ability to answer many questions is important if we want our forecasting efforts to succeed.
We see some evidence of this from the SciCast project, a prediction tournament on science and technology that ran from 2013-2015. The tournament organizers note the importance of scaling question generation through templates and the creation of a style guide. (See the [2015 Annual report](https://mason.gmu.edu/~rhanson/SciCast2015.pdf), p. 86.)
**3) So in order to forecast AI we must achieve economies-of-scale – making it cheap to write and answer the marginal question by efficiently reusing work across them.**
### AI Forecasting Dictionary
As a piece of the puzzle to solve the above problems, we made the [AI Forecasting Dictionary](https://parallel-forecast.github.io/AI-dict/). It is an open-source set of standards and conventions for precisely interpreting AI and auxiliary terms. Here's an example entry:
> **Automatable**
> *See also: [Job](https://parallel-forecast.github.io/AI-dict/docs/dictionary.html#job)*
> A job is automatable at a time t if a machine can outperform the median-skilled employee, with 6 months of training or less, at 10,000x the cost of the median employee or less. Unless otherwise specified, the date of automation will be taken to be the first time this threshold is crossed.
> Examples:
> * As of 2019, [Elevator Operator](https://web.archive.org/web/20190318031620/https://qz.com/932516/over-the-last-60-years-automation-has-totally-eliminated-just-one-us-occupation/)
> Non-examples:
> * As of 2019, Ambulance Driver
> * As of 2019, Epidemiologist
(This definition is based on Luke Muehlhauser's [here](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines).)

There are several mechanisms whereby building a dictionary helps solve the problems outlined above.
**Less overhead for writing and forecasting questions**
The dictionary reduces overhead in two ways: writers don’t have to reinvent the wheel whenever they operationalise a new thought, and forecasters can reduce the drag of constantly interpreting and understanding new resolutions. This makes it cheaper to both generate and answer the marginal question.
**A platform for spreading high initial costs over many future use cases**
There are a number of [common pitfalls](https://parallel-forecast.github.io/AI-dict/docs/best-practices) that can make a seemingly valid question ambiguous or misleading. For example, positively resolving the question:
> Will an AI lab have been nationalized by 2024?
by the US government nationalising GM as a response to a financial crisis, yet GM nonetheless having a self-driving car research division. Or forecasting:
> When will there be a superhuman Angry Birds agent using no hardcoded knowledge?
and realizing that there seems to be little active interest in the yearly benchmark competition (with performance even declining over years). This means that the probability entirely depends on whether anyone with enough money and competence decides to work on it, as opposed to what key components make Angry Birds difficult (e.g. physics-based simulation and planning) and how fast progress is in those domains.
Carefully avoiding such pitfalls comes with a high initial cost when writing the question. We can make that cost worth it by ensuring it is amortized across many future questions, and broadly used and built upon. A Dictionary is a piece of infrastructure that provides a standardised way of doing this. If someone spends a lot of time figuring out how to deal with a tricky edge case or a “spurious resolution”, there is now a Schelling point where they can store that work, and expect future users to read it (as well as where future users can expect them to have stored it).
**Version management**
When resolving and scoring quantitative forecasting questions, it's important to know exactly what question the forecaster was answering. This need for precision often conflicts with the need to improve the resolution conditions of questions as we learn and stress-test them over time. For the Dictionary, we can use [best practices for software version management](https://semver.org/) to help solve this problem. As of this writing, the Dictionary is still in beta, with the latest release being v0.3.0.
**Open-source serendipity**
The Dictionary might be useful not just for forecasting, but also for other contexts where precisely defined AI terms are important. We open-sourced it in order to allow people to experiment with such use cases. If you do so in a substantial way, please [let us know](mailto:[email protected]).
### How to use the dictionary
If you use the Dictionary for forecasting purposes, please reference it to help establish it as a standard of interpretation. One way of doing this is by appending the tag [ai-dict-vX.Y.Z] at the end of the relevant string. For example:
> I predict that image classification will be made robust against unrestricted adversarial examples by 2023. [ai-dict-v2]
or
> Will there be a superhuman Starcraft agent trained using less than $10,000 of publicly available compute by 2025? [ai-dict-v0.4]
In some cases you might want to tweak or change the definitions of a term to match a particular use case, thereby departing from the Dictionary convention. If so, then you SHOULD mark the terms receiving a non-standard interpretation with the “^” symbol. For example:
> I expect unsupervised language models to be human-level^ by 2024. [ai-dict-v1.3]
You might also want to add the following notice:
> For purposes of resolution, these terms are interpreted in accordance with the Technical AI Forecasting Resolution Dictionary vX.Y.Z, available at ai-dict.com. Any term whose interpretation deliberately departs from this standard has been marked with a ^.
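For illustration only, here is roughly how a question platform could pull out the version tag and any ^-marked terms from a question string. The function and regexes below are our own sketch, not part of the Dictionary or of any existing platform's tooling.

```python
import re

# Matches tags like [ai-dict-v2], [ai-dict-v0.4], [ai-dict-v1.3.0].
TAG_RE = re.compile(r"\[ai-dict-v(\d+(?:\.\d+){0,2})\]")
# Matches terms flagged with a trailing caret, e.g. "human-level^".
CARET_RE = re.compile(r"(\w[\w-]*)\^")

def parse_question(text: str):
    """Return (dictionary_version_or_None, terms_with_nonstandard_interpretation)."""
    match = TAG_RE.search(text)
    version = match.group(1) if match else None
    nonstandard = CARET_RE.findall(text)
    return version, nonstandard

# parse_question("I expect unsupervised language models to be human-level^ by 2024. [ai-dict-v1.3]")
# -> ("1.3", ["human-level"])
```

A platform could then resolve the tagged version against the Dictionary's published releases, and fall back to the latest release (or ask the question author) when no tag is present.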
### How to contribute to the dictionary
The AI Forecasting Dictionary is open-source, and you can contribute by making pull-requests to our GitHub or suggestions in the Google Doc version (more details [here](https://parallel-forecast.github.io/AI-dict/)). We especially welcome:
* Attempts to introduce novel definitions that capture important terms in AI (current examples include: “[module](https://parallel-forecast.github.io/AI-dict/docs/dictionary.html#module)”, “[transformative AI](https://parallel-forecast.github.io/AI-dict/docs/dictionary.html#transformative-artificial-intelligence)” and “[compute (training)](https://parallel-forecast.github.io/AI-dict/docs/dictionary.html#compute-training)”)
* Examples of forecasting questions which you wrote and which ended up solving/making progress on some tricky piece of operationalisation, such that others can build on that progress
**Footnotes**
[1] Some people might be compelled by an analogy to mathematics here: most of the work often lies in setting up the right formalism and problem formulation rather than in the actual proof (for example, Nash's original fixed point theorems in game theory aren't that difficult once the set-up is in place, but realising why and how this kind of set-up was applicable to a large class of important problems was highly non-trivial).
[2] English Common Law is a clear example of how definitions and policies evolve over time to crystallize judgements and wisdom.
Social status games might have "compute weight class" in the future
(Note: I don't think "social status games" are bad, although I think it's usually not healthy/helpful to focus on them as "social status games qua games." i.e. I have some motivation to be good at telling fun stories at parties, or write music that people will like, or throw interesting events. There are also things like "being a good listener" / "helpful advice-giver." Some of this is motivated by intrinsic joy of the activity, but some is motivated by wanting to feel respected/cool. This seems fine in healthy doses)
A thought I had while reading We probably won't just play status games with each other after AGI (which I expected to say "we'll do other things than play status games" but AFAICT said "we'll play status games with AIs too").
A response I first had was "but, the AI will be so massively better than us at everything, it'd just be lame to be competing with them."
But, thinking about it a bit more: probably eventually, many/most people are uploads, and they are also running on optimized artificial brains. Bio humans may have access to various augmentations, either biological enhancement or tools like in The Gentle Romance.
I'm not sure about bio humans, but, probably there will eventually be less distinction between uploads and various types of AIs. There will be superintelligences with jupiter brains. There may be some uploads with jupiter brains too. Uploads will probably compete socially with AIs.
In martial arts, we have a concept of "weight class", to make fights "fair" and interesting. I guess we actually sort of already have this for social status games – people tend to compete in arenas with similar socioeconomic status and background. (There's also a "locality" aspect, partially eroded by the internet, where you tend to be competing with people in your same geographic area. On the internet, there's still something like "scenes", like you might be trying to make cool Filk Songs in the filk community)
Somewhere out there are Old Money dynastie
AI Can't Write Good Fiction
When Deepseek came out, there was a lot of fanfare about it being good at creative writing. I like AI and I love creative writing, so I decided to give it a spin. Specifically, I told Deepseek to do its best to write a story that might get accepted at Smokelong, one of the best flash fiction magazines in the business.
It came up with:
> The morning her shadow began unspooling from her feet, Clara found it coiled beneath the kitchen table like a serpent made of smoke. It didn’t mirror her anymore—not the tremble in her hands as she poured coffee, not the way she pressed a palm to her ribs, as if holding herself together. It just watched.
>
> “You’re not him,” she whispered, but the shadow rippled, ink-dark edges softening into a silhouette too broad, too familiar. She’d buried that shape six months ago, shoveled dirt over its echo. Yet here it was, pooling in the cracks of the linoleum.
The “story” continued from there, but you probably get the idea.
Superficially, the pieces are there. Evocative imagery, a dark emotional theme, sensory metaphors. I once taught a flash fiction course to undergraduates, and I would have been happy enough to see this up for workshop.
Also, though, there’s nothing there. Grief is the most basic possible flash fiction theme. “A shadow” is the most basic possible metaphor for grief. Not that Deepseek stops with one metaphor! We’re shoveling dirt over an echo here!
It’s hard to imagine anything sticking with me, reading prose like this. It’s pretty good to strip-mine for sentences to capture what I call the “gee whiz” feeling, a surprise at the novelty that a machine made art. But if I saw this on a literary review site, I’d immediately wonder what I was missing.
Compare to this, from an actual Smokelong story, by Allison Field Bell:
> She keeps saying she’s moving in. When we’re curled into each other in bed, she says yes and yes and yes. She says, I’ll pack my things. And then the next day, she refuses to hold my hand in the str
What are the causal effects of an agent's presence in a reinforcement learning environment
Suppose we have an agent A trying to optimise for a reward R in an environment S.
How can we tell that the presence of the agent does not affect the environment, and that the measurement (observation) reflects not only the agent but the environment itself?
This is related to the measurement problem in quantum mechanics: we have an "agent" (a particle) in a quantum superposition. Consider an electron with two possible configurations, up and down, a|↑⟩ + b|↓⟩; when we measure the state, the wavefunction collapses to a particular classical state, up or down.
Moreover, the observer effect notes that measurements of certain systems cannot be made without affecting the system.
The uncertainty principle, meanwhile, argues that we cannot predict the value of a quantity with arbitrary certainty.
Another way to state the problem is: does measuring the state (or the action) affect the state of the environment?
How are the observations in physics different from the observations we make in RL?
Is the environment state at S1 causal to the state at S2?
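To make this concrete, here is a toy sketch contrasting an environment whose transitions depend only on (state, action) with one where the act of observing also perturbs the state. The environment and its dynamics are invented purely for illustration; they are not meant to model any standard RL benchmark.

```python
import random

class DriftEnv:
    """Toy 1-D environment: the state drifts randomly and the action nudges it."""

    def __init__(self, observation_backaction=0.0):
        # observation_backaction > 0 models an "observer effect":
        # the act of measuring itself perturbs the state.
        self.state = 0.0
        self.backaction = observation_backaction

    def step(self, action):
        self.state += action + random.gauss(0.0, 0.1)
        reward = -abs(self.state)        # reward for keeping the state near 0
        return self.observe(), reward

    def observe(self):
        # With backaction == 0 this call has no causal effect on the environment
        # (the usual RL assumption); with backaction > 0, the measurement itself
        # changes the state, analogous to the observer effect.
        if self.backaction > 0:
            self.state += random.gauss(0.0, self.backaction)
        return self.state

# Comparing trajectory statistics of DriftEnv(0.0) and DriftEnv(0.05) under the
# same action sequence is one crude way of asking whether S1 -> S2 is causal
# independently of the agent's measurements.
```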
"(This post was co-authored by me (Cody Wild), Tessa Alexanian and Ray Arnold)TL;DR We have 2 days - until July 1 - to increase monthly donations by $2260 or raise $27,000 in lump sum , or else REACH will lose its current location and go on indefinite hiatusDo you care about the Berkeley REACH staying open? Has an open-access community space added value to your life? We have a few days to raise enough money to comfortably sign a one-year lease. If not, REACH will lose its current location, and the project will go into indefinite hiatus.Community support for REACH has been substantial, and we’re proud to see the number of people willing to pay seriously for something important. But we haven’t hit the level where it’s realistic to sign a full year lease. For that, we’d either need an additional $2260/month on Patreon, or up-front donations of around $27,000 to make us secure enough to do so. If this lease isn’t signed in the next few days, REACH will lose its current location at the end of July.If you’re excited about REACH, you’re probably already donating. We care about our community not overextending itself. One of the main benefits of REACH is to help those who haven’t yet gotten their financial footing stabilized, and if that’s your situation we don’t want you feeling pressure here.But if you can comfortably donate more to REACH – if you’re one of those 6 figure salary programmers who are excited by the prospect of having an honest-to-goodness community in a world that increasingly pushes towards atomization, we’d like you to consider whether you want to do more.If you’re willing and able to pledge more ongoing support than you already have, or want to make a large, one-time donation to help an early-stage project get its footing, the next few days are the time to come forward and do so.Background ReadingReflections on the Berkeley REACH - Sarah Spikes, June 7Thoughts on the REACH Patreon - Ray Arnold, April 30Humans Need Places - Ben Hoffman, April 19Current Financial RealityREACH has a funding gap. While this funding gap remains, it doesn’t make sense to sign the lease.AssetsREACH is earning $3,600 a month in the PatreonThe PayPal has brought in around $10,000 so farCostsThe landlord has been negotiated down to $5,500/month in rentShort-term room rentals have brought in $1000-$2000/month; we expect that to generally be lower outside of the summer months and think $1000/month is a reasonable year-averaged estimateFunding a 20 hour/week caretaker for the space, at minimum wage, would cost $15,110 over the yearMaintenance and capital costs, optimistically, are around $100/monthNot counting lost wages, about $10,000 of personal funds were already invested into this projectTaking into account solely the cost of rent, the funding gap on recurring and reliable funds is $900 per month, or $10,800 for the year. If we’re trying to fund the real cost of the space in a sustainable way, we estimate the size of this year’s funding gap at:That’s a funding gap of $2260 per month, or $27,120 for the year.Closing the Funding GapA few large donations to the PayPal, in the next few days, could offset enough of the $27,120 full-year shortfall that signing the lease would seem like a reasonable risk.In general, though, a Patreon donation pledge can be relied upon in a way that ad hoc donations just can’t. 
Many of you have, in the past, given PayPal donations— which have been valuable— but if REACH is to survive, more people for whom REACH is valuable need to commit to donating funds on a monthly basis.Why not move somewhere less expensive?First off, this is the Berkeley REACH. It’s not surprising that rent is kind of high.A house? There are houses in Berkeley with a similar amount of common space, but modifying them into a community center would require a permit from the city. This would, very optimistically, be a 5-8 month process.Other commercial space? There are commercial spaces in Berkeley that are less expensive on a per-square-foot level. However, they typically require 3-5 year leases. This project is still evolving and stabilizing financially, so that’s too long of a commitment to ask a leaseholder to take on.Even putting aside the hurdles of one of the above options, it would take at minimum several weeks to find an alternative, lower-rent venue, during which significant momentum on the project would be lost. The investments to the space, the donations, and the ways that REACH’s primary caretakers have configured their lives around it would be temporarily undone, and would have to be built up again.Why are we posting this?We won’t argue that this is effective altruism – while you could make a case for that, it depends on a lot of assumptions, and you may not want to count this in your Giving What We Can Pledge.Yet. Humans are built for living in community, and we believe that donating money to help members of your community connect and thrive will make us stronger. We all spend money on things to make us happier, and we believe that, out of the available options, investing in community is one of the richer and more sustainable ways to do that. To quote Ray Arnold:I think it’s very good that the rational/EA-sphere spends attention on helping far away or future people, even if they won’t return the favor. Part of living in the present era can and should include noticing that you have a lot of power, and the opportunity to use that power to help people at scale.But, while money is the unit of caring, there’s plenty of things to care about other than far away people.Freethinker-esque communities don’t just have trouble cooperating for grand altruistic projects. They struggle to cooperate to just buy themselves some god damn nice things._I’m of the opinion people do not spend nearly enough money investing in their own communities.—from Thoughts on the REACH PateronBy reducing the necessary activation energy, we make it easier for any given community member to think of a valuable community meetup series, or workshop, or activity, and just do it. There is a lot of bonding and mutual growth that comes out of events that would be too high-effort to initiate if an open space weren’t easily available.There aren’t many parts of urban, secular, atomized society that optimize for community. REACH is still very much an experiment in going against that grain and creating an intentional space to be together and build together. We haven’t yet seen the full results of that experiment, but what we’ve seen so far is enough to make us hopeful.Without more funding, the REACH will close. We hope you will join us in extending its run a little further."
The above is a chunk of text from a post created by a user on the website LessWrong.
Could you elaborate on the specific challenges and potential solutions in securing sustainable funding for community projects like REACH?
Securing sustainable funding for community projects like REACH poses several challenges.
1. **Limited Resources**: Community projects often rely on donations or grants, which are not always guaranteed. The pool of available funds can be limited, especially in times of economic downturn.
2. **Competition**: There are many worthy causes out there, and community projects must compete with other organizations for the same pool of funds. This competition can be fierce, particularly for grants from government agencies or foundations.
3. **Lack of Long-Term Commitment**: Many donors prefer to give one-time donations rather than commit to ongoing support. This can make it difficult for community projects to plan for the future and ensure their sustainability.
4. **High Operational Costs**: Running a community project can be expensive. Costs such as rent, utilities, maintenance, and staff wages can quickly add up. In high-cost areas like Berkeley, these expenses can be particularly high.
5. **Uncertainty**: Community projects like REACH are often experimental and evolving, which can make it difficult to predict future costs and revenue.
There are, however, potential solutions to these challenges:
1. **Diversifying Funding Sources**: By seeking funding from a variety of sources, community projects can reduce their dependence on any one donor or grant. This could include individual donations, corporate sponsorships, grants, fundraising events, and earned income strategies such as charging for services or selling products.
2. **Building Strong Relationships with Donors**: By communicating regularly with donors and showing them the impact of their support, community projects can build strong relationships that encourage ongoing support.
3. **Creating a Sustainable Business Model**: By finding ways to generate consistent revenue, community projects can ensure their long-term sustainability. This could involve charging for services, renting out space, or other earned income strategies.
4. **Reducing Costs**: By finding ways to reduce operational costs, community projects can make their funding go further. This could involve negotiating lower rent, reducing utility costs, or finding ways to operate more efficiently.
5. **Building a Strong Case for Support**: By clearly articulating the value of the project and its impact on the community, organizations can make a compelling case for support that attracts donors and grant funding.
In light of the strategies mentioned for securing sustainable funding, how might a community project like REACH implement these solutions, considering its unique challenges and opportunities?
Given REACH's unique challenges and opportunities, here are ways they might implement these solutions:
1. **Diversifying Funding Sources**: REACH could seek funding from a variety of sources. For instance, they could apply for grants aimed at community development or social innovation. They could also explore corporate sponsorships from companies interested in supporting community initiatives. Fundraising events could be organized, and a portion of the space could be rented out for private events to generate additional income.
2. **Building Strong Relationships with Donors**: REACH could develop a robust communication strategy to keep donors informed about their activities and the impact of their work. This could involve regular newsletters, social media updates, and personal thank-you notes. They could also host donor appreciation events or offer opportunities for donors to engage with the community and see the impact of their support firsthand.
3. **Creating a Sustainable Business Model**: REACH could consider charging a nominal fee for some services or events. This could provide a steady stream of income while still keeping the space accessible to those who cannot afford to pay. They could also consider a membership model, where individuals pay a regular subscription fee for access to exclusive benefits.
4. **Reducing Costs**: Given the high cost of rent in Berkeley, REACH could explore options for sharing the space with other organizations, which could help to distribute the cost. They could also look for ways to reduce utility costs, such as implementing energy-saving measures.
5. **Building a Strong Case for Support**: REACH should articulate the value they bring to the community. This could involve sharing success stories, providing data on the number of people served, and highlighting the unique role they play in fostering community in an increasingly atomized society. By making a compelling case for support, they can attract more donors and potentially secure larger donations.
In all of these strategies, it's important for REACH to leverage its unique strengths and opportunities. As a community hub in a city known for innovation and social consciousness, they are well-positioned to attract support from individuals and organizations who value community and connection.