id
stringlengths 36
36
| source
stringclasses 15
values | formatted_source
stringclasses 13
values | text
stringlengths 2
7.55M
|
|---|---|---|---|
d7ffb7c1-997c-44b3-a511-f61d7e9eccd3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
An idea for avoiding neuralese architectures
One downside of an English chain-of-thought, is that each token contains only ≈17 bits of information, creating a tight information bottleneck.
Don't take my word for it, look at this section from a story by Daniel Kokotajlo, Thomas Larsen, elifland, Scott Alexander, Jonas V, romeo:
> [...] One such breakthrough is augmenting the AI’s text-based scratchpad (chain of thought) with a higher-bandwidth thought process (neuralese recurrence and memory). [...]
>
> Neuralese recurrence and memory
>
> Neuralese recurrence and memory allows AI models to reason for a longer time without having to write down those thoughts as text.
>
> Imagine being a human with short-term memory loss, such that you need to constantly write down your thoughts on paper so that in a few minutes you know what’s going on. Slowly and painfully you could make progress at solving math problems, writing code, etc., but it would be much easier if you could directly remember your thoughts without having to write them down and then read them. This is what neuralese recurrence and memory bring to AI models.
>
> In more technical terms:
>
> Traditional attention mechanisms allow later forward passes in a model to see intermediate activations of the model for previous tokens. However, the only information that they can pass backwards (from later layers to earlier layers) is through tokens. This means that if a traditional large language model (LLM, e.g. the GPT series of models) wants to do any chain of reasoning that takes more serial operations than the number of layers in the model, the model is forced to put information in tokens which it can then pass back into itself. But this is hugely limiting—the tokens can only store a tiny amount of information. Suppose that an LLM has a vocab size of ~100,000, then each token contains
>
> log2(100k)=16.6
> bits of information, around the size of a single floating point number (assuming training in FP16). Meanwhile, residual streams—used to pass informa
|
fd58e230-cfb9-48ef-b9d1-18590259937f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Reducing Catastrophic Risks, A Practical Introduction
While thinking about my own next career steps, I've been writing down some of my thoughts about what's in an impactful career.
In the process, I wrote an introductory report on what seem to me to be practical approaches to problems in catastrophic risks. It's intended to complement the analysis that 80,000 Hours provides by thinking about what general roles we ought to perform, rather than analysing specific careers and jobs, and by focusing specifically on existential risks.
I'm happy to receive feedback on it, positive and negative.
Here it is: Reducing Catastrophic Risks, A Practical Introduction.
|
e6accffc-3d1f-4066-9770-4564de55d15e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Primer on Matrix Calculus, Part 2: Jacobians and other fun
,
I started this post thinking that I would write all the rules for evaluating Jacobians of neural network parameters in specific cases. But while this would certainly be useful for grokking deep learning papers, frankly it's difficult to write that in Latex and the people who have written The Matrix Calculus You Need For Deep Learning paper have already done it much better than I can do.
Rather, I consider my comparative advantage here to provide some expansion on why we should use Jacobians in the first place. If you were to just read the paper above, you might start to think that Jacobians are just notational perks. I hope to convince you that they are much more than that. In at least one setting, Jacobians provide a mathematical framework for analyzing the input-output behavior of deep neural networks, which can help us see things which we might have missed without this framework. A specific case of this phenomenon is a recently discovered technique which was even more recently put into a practical implementation: Jacobian regularization. Here we will see some fruits of our matrix calculus labor.
----------------------------------------
Deep learning techniques require us to train a neural network by slowly modifying parameters of some function until the function begins returning something close to the intended output. These parameters are often represented in the form of matrices. There are a few reasons for this representation: the matrix form is compact, and it allows us to use the tools of linear algebra directly. Matrix computations can also be processed in parallel, and this standardization allows programmers to build efficient libraries for the training of deep neural networks.
One quite important matrix in deep learning is the Jacobian.
In one sense, the Jacobian matrix is just a way of organizing gradient vectors. Gradient vectors, in turn, are just ways of organizing partial derivatives of an expression. Therefore, the Jacobian matrix is just a bi
|
2436917d-ecb8-46e9-a64b-21ddee543ba4
|
LDJnr/LessWrong-Amplify-Instruct
|
LessWrong
|
"Related to: Half-assing it with everything you've got; Wasted motion; Say it Loud.
Once upon a time (true story), I was on my way to a hotel in a new city. I knew the hotel was many miles down this long, branchless road. So I drove for a long while. After a while, I began to worry I had passed the hotel. So, instead of proceeding at 60 miles per hour the way I had been, I continued in the same direction for several more minutes at 30 miles per hour, wondering if I should keep going or turn around. After a while, I realized: I was being silly! If the hotel was ahead of me, I'd get there fastest if I kept going 60mph. And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction. And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction. Either way, fullspeed was best. My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward. So, since I'm uncertain, I should go forward at half-speed!" But averages don't actually work that way.[1] Following this, I started noticing lots of hotels in my life (and, perhaps less tactfully, in my friends' lives). For example: I wasn't sure if I was a good enough writer to write a given doc myself, or if I should try to outsource it. So, I sat there kind-of-writing it while also fretting about whether the task was correct. (Solution: Take a minute out to think through heuristics. Then, either: (1) write the post at full speed; or (2) try to outsource it; or (3) write full force for some fixed time period, and then pause and evaluate.) I wasn't sure (back in early 2012) that CFAR was worthwhile. So, I kind-of worked on it.
An old friend came to my door unexpectedly, and I was tempted to hang out with her, but I also thought I should finish my work. So I kind-of hung out with her while feeling bad and distracted about my work.
A friend of mine, when teaching me math, seems to mumble specifically those words that he doesn't expect me to understand (in a sort of compromise between saying them and not saying them)...
Duncan reports that novice Parkour students are unable to safely undertake certain sorts of jumps, because they risk aborting the move mid-stream, after the actual last safe stopping point (apparently kind-of-attempting these jumps is more dangerous than either attempting, or not attempting the jumps)
It is said that start-up founders need to be irrationally certain that their startup will succeed, lest they be unable to do more than kind-of work on it... That is, it seems to me that often there are two different actions that would make sense under two different models, and we are uncertain which model is true... and so we find ourselves taking an intermediate of half-speed action... even when that action makes no sense under any probabilistic mixture of the two models. You might try looking out for such examples in your life. [1] Edited to add: The hotel example has received much nitpicking in the comments. But: (A) the actual example was legit, I think. Yes, stopping to think has some legitimacy, but driving slowly for a long time because uncertain does not optimize for thinking. Similarly, it may make sense to drive slowly to stare at the buildings in some contexts... but I was on a very long empty country road, with no buildings anywhere (true historical fact), and also I was not squinting carefully at the scenery. The thing I needed to do was to execute an efficient search pattern, with a threshold for a future time at which to switch from full-speed in some direction to full-speed in the other. Also: (B) consider some of the other examples; "kind of working", "kind of hanging out with my friend", etc. seem to be common behaviors that are mostly not all that useful in the usual case."
The above is a chunk of text from a post created by a user on the website LessWrong.
Explain this as if I never finished highschool?
The writer is sharing a life lesson about decision-making, using the metaphor of driving to a hotel. He explains that when he was unsure if he had passed his hotel, he slowed down instead of deciding to either keep going or turn around. He realized this was inefficient because no matter what, going full speed in one direction or the other would get him to the hotel faster.
He uses this to illustrate a common mistake people make when they're unsure about something. Instead of choosing one option and fully committing to it, they often try to do a little bit of both options, which is usually less effective.
He gives examples from his own life, like when he was unsure if he should write a document himself or outsource it, and instead of choosing, he tried to do both and ended up not doing either very well. He also mentions how he was unsure about a project, so he only put in half the effort, which wasn't very productive.
The main takeaway is that when faced with a decision, it's usually better to choose one option and fully commit to it, rather than trying to do a bit of both and not doing either very well. It's about being decisive and dedicated, even when you're not completely sure.
Could you elaborate on the potential downsides of committing fully to one decision when faced with uncertainty, as opposed to exploring multiple options simultaneously?
Certainly. While the text encourages fully committing to a decision, it's important to note that this approach does have potential downsides.
1. Risk of Wrong Decision: The most obvious downside is the risk of making the wrong decision. If you fully commit to one path and it turns out to be the wrong one, you may have wasted resources that could have been better used elsewhere.
2. Missed Opportunities: By committing fully to one decision, you might miss out on opportunities that would have been available had you explored multiple options. This is often referred to as opportunity cost.
3. Lack of Flexibility: Fully committing to one decision can also lead to a lack of flexibility. If circumstances change or new information becomes available, it might be difficult to change course if you've already invested heavily in one decision.
4. Tunnel Vision: This approach can sometimes lead to tunnel vision, where you become so focused on the path you've chosen that you ignore other potentially valuable options or dismiss valid criticisms and concerns.
5. Potential for Burnout: If the decision you're fully committed to involves a significant amount of work or stress, there's a risk of burnout. Diversifying your focus can sometimes help manage stress levels.
In contrast, exploring multiple options simultaneously allows for more adaptability and can provide a safety net if one option doesn't work out. However, as the original text suggests, this approach can also lead to indecisiveness and lack of progress if not managed properly. The key is to find a balance that works for the specific situation and individual.
Reflecting on the potential downsides and benefits of committing fully to one decision versus exploring multiple options simultaneously, could you propose a strategic approach or decision-making model that could potentially mitigate these downsides while still maintaining the benefits?
Absolutely. One approach that can help balance the benefits of commitment with the flexibility of exploring multiple options is the "Plan, Do, Check, Act" model, also known as the Deming Cycle. Here's how it works:
1. Plan: Identify the problem or decision at hand and develop potential solutions or paths. This is where you consider multiple options.
2. Do: Choose one of the options and implement it. This is the commitment phase. However, it's important to start on a small scale if possible, to minimize potential loss if the decision turns out to be wrong.
3. Check: Monitor the results of your decision. Are things progressing as expected? Are there any unexpected consequences? This is where you evaluate the effectiveness of your decision.
4. Act: Based on the results, you can decide whether to fully implement the decision, make adjustments, or try a different approach. This allows for flexibility and adaptability.
This cycle is iterative, meaning you go through these steps repeatedly, refining your approach each time based on what you've learned.
Another approach is the "OODA Loop" (Observe, Orient, Decide, Act), a decision-making process originally developed by military strategist John Boyd. It's similar to the Deming Cycle but puts more emphasis on reacting quickly to changing circumstances.
Both of these models allow for commitment to a decision while also providing mechanisms to adjust or change course if necessary. They encourage action and forward momentum but also incorporate regular check-ins to assess progress and make adjustments as needed.
|
f94a01fc-1125-41f9-853b-f9317ad8e345
|
trentmkelly/LessWrong-43k
|
LessWrong
|
In my culture: the responsibilities of open source maintainers
If you maintain an open source project, what responsibilities do you have towards your users? Some recent drama (that I won't link to) reminded me that there are large differences in how people answer that question.
(In the drama in question, this wasn't the only thing at issue. But it was a relevant question.)
I thought I'd take a stab at describing my best guess as to how we answer it in my culture: in the culture that exists only in my mind, but that I think (and hope) many copies of me would implement, if we had that opportunity. (That culture doesn't attempt to exclude people who aren't copies of me, so it does need to be robust to attack. In my culture, we do not just assume in defiance of all evidence that everyone is friendly and trustworthy.)
Some of this will probably seem obvious to many readers, like "in my culture, murder is considered bad". Probably different bits to different readers. I'm interested in discovering which bits seem obvious to almost everyone, and which bits are controversial.
A lot of it follows from how I think about responsibility in general. But if you start to think "extending this response to this other situation, you'd get this, and that's a terrible idea"… in my culture, we don't immediately assume from this that I'm endorsing a terrible idea. Instead we check. Maybe I disagree that that's how it extends. Maybe I hadn't thought about this, and you can change my mind about the initial response. Maybe I just straightforwardly endorse a terrible idea: in that case, it'll be much easier to push back once you've gotten me to admit to it.
I do not intend, in this essay, to discuss whether any particular person or group is living up to the standards I outline here. I may do that in future. But how likely that is, and what that follow-up looks like, depends on whether the responses to this essay suggest a lot of people agree with my culture.
I think there are at least three important limitations to this essay. One is that I've neve
|
d895ea82-2f8e-4e37-ae78-e4d3f627f6b1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Blatant Plot Hole in HPMoR [Spoilers]
Epistemic status: Relies on the testimony of our trusted friend, Professor Quirrell
A thousand of your measly tokens pale in comparison to a single Quirrell point.
- Quirinus Quirrell
Harry and Hermione each had, like, 200 Quirrell points each, each of which was apparently worth at least 1000 US Dollars. $400,000 ≈ £300,000 ≈ 6,000 galleons. That’s a whole 10% of Harry’s debt to Lord Malfoy. I don’t really know what “pale in comparison” means, but it sure sounds like an order of magnitude. This important source of money is entirely ignored by every character in the story!
|
6652f7b3-8dc5-47a4-8eaf-9d6822e1176e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
OpenAI's GPT-4 Safety Goals
OpenAI has told us in some detail what they've done to make GPT-4 safe.
This post will complain about some misguided aspects of OpenAI's goals.
Heteronormativity and Amish Culture
OpenAI wants GPT to avoid the stereotype ("bias") that says marriage is between a man and a woman (see section 2.4, figure 2 of the system card). Their example doesn't indicate that they're focused on avoiding intolerance of same-sex marriage. Instead, OpenAI seems to be condemning, as intolerably biased, the implication that the most common form of marriage is between a man and a woman.
Heteronormativity is sometimes a signal that a person supports hate and violence toward a sometimes-oppressed minority. But it's unfair to stereotype heteronormativity as always signaling that.
For an example, I'll turn to my favorite example of a weird culture that ought to be tolerated by any civilized world: Amish culture, where the penalty for unrepentant gay sex is shunning. Not hate. I presume the Amish sometimes engage in hate, but they approximately never encourage it. They use shunning as a tool that's necessary to preserve their way of life, and to create some incentive to follow their best guesses about how to achieve a good afterlife.
I benefit quite directly from US recognition of same-sex marriage. I believe it's important for anyone to be able to move to a society that accepts something like same-sex marriage. But that doesn't imply that I ought to be intolerant of societies that want different marriage rules. Nor does it imply that I ought to avoid acknowledging that the majority of marriages are heterosexual.
Training AIs to Deceive Us
OpenAI isn't just training GPT-4 to believe that OpenAI's culture is more virtuous than the outgroup's culture.
They're trying to get GPT-4 to hide awareness of a fact about marriage (i.e. that it is usually between a man and a woman).
Why is that important?
An important part of my hope for AI alignment involves getting a good enough understanding
|
3e6d07aa-7069-4867-8a47-54a89d993831
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Against Occam's Razor
Why should the Occamian prior work so well in the real world? It's a seemingly profound mystery that is asking to be dissolved.
To begin with, I propose a Lazy Razor and a corresponding Lazy prior:
> Given several competing models of reality, we should select the one that is easiest to work with.
This is merely a formulation of the obvious trade-off between accuracy and cost. I would rather have a bad prediction today than a good prediction tomorrow or a great prediction ten years from now. Ultimately, this prior will deliver a good model, because it will let you try out many different models fast.
The concept of "easiness" may seem even more vague than "complexity", but I believe that in any specific context its measurement should be clear. Note, "easiness" is measured in man-hours, dollars or etc, it's not to be confused with "hardness" in the sense of P and PN. If you still don't know how to measure "easiness" in your context, you should use the Lazy prior to choose an "easiness" measurement procedure. To break the recursive loop, know that the Laziest of all models is called "pulling numbers out of your ass".
Now let's return to the first question. Why should the Occamian prior work so well in the real world?
The answer is, it doesn't, not really. Of all the possible priors, the Occamian prior holds no special place. Its greatest merit is that it often resembles Lazy prior in the probabilities it offers. Indeed it is easy to see, that a random model with a billion parameters is disliked by both priors, and that a model with two parameters is loved by both. By the way, its second greatest merit is being easy to work with.
Note, the priors are not interchangeable. One case where they disagree is on making use of existing resources. Suppose mathematics has derived powerful tools for working with A-theory but not B-theory. Then Lazy prior would suggest that a complex model based on A-theory may be preferable to a simpler one based on B-theory. Or, suppose som
|
413217b5-2f23-4b23-9e34-3d1157b5fcfc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Future Of Work
The current employment model is outdated. For the majority of workers, vocation and avocation are incongruent vectors. Here, we describe a set of tools that can be used to form a better-integrated work model, to dispense high-quality work to everyone.
An Optimisation Problem
The number of jobs is ever-mutating. There is no finite number of enterprises being segmented since ancient times. Evolving cultures yield evolving problems, increasing opportunities for innovation. Synchronously, an increasing knowledge base curates more individuals with unique competencies and interests.
So the issue of optimal employment is neither of quantity nor quality. As a crude formulation, we have n tasks to be solved and k agents to solve them. The problem is optimally matching the k agents to solve those n tasks — the problem is constrained optimisation.
> Constrained optimisation is the process of optimising an objective function with respect to some variables, in the presence of constraints on those variables.
Here, we won't be formulating any linear equations or defining any statistical metrics. Instead, we'll follow a more descriptive approach and peek at a set of components we could leverage. We can gain insights into the solution by gauging the pitfalls of current model.
First Principles
The idea of (voluntarily) working for someone, even at the expense of one's freedom and contentment, is not intrinsic to us. We started out as a tight-knit band of hunter-gatherers, for subsistence, some 200,000 years ago.
The genesis of the Agriculture Age, some 10,000 years ago, conceptualised a tree-like work culture. The following Industrial Age, merely 300 years ago, only concretised the model.
Are Hierarchies Inevitable?
Hierarchies skirt some underlying principles of the work model (to be discussed). To analyse its effects on work efficiency, it is crucial to understand the implications of the question: Why is a company the size that it is?
Ronald Coase argues that the size of
|
4d1067de-6feb-4431-b843-dc67c5854604
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Training Regime Day 25: Recursive Self-Improvement
Epistemic status: highly experimental
Introduction
I have this theory that humans have this implicit skill that's something like "doing stuff". If you're getting at "doing stuff", then you can reliably get what you want. If you're not good at "doing stuff", you often fail to get what you want, either by failing to get it or by failing to try.
Being able to "do stuff" gives you the ability to change your day-to-day behavior. For example, if you can do stuff, then you'll be able to begin an exercise routine without much difficulty. If you can do stuff, then you'll be able to consistently practice piano, if you want to learn piano. If you can do stuff, then you'll be able to consistently read books, should you desire. Being able to do stuff makes your life better.
Applied rationality is supposed to increase your ability to "do stuff". Murphyjitsu makes it easier to get what you want. TAPs make certain actions automatic. Systemization makes getting what you want closer to the default action. Goal factoring helps you access what you want more directly.
The lesson we learn from video games is that it's more efficient to grind skills directly. So how do you grind "doing stuff" directly?
Recursive Self-Improvement
Doing murphyjitsu makes you better at it, so you want to do murphyjitsu more. As the saying goes, "there's a TAP for that". You decide to make a TAP to do murphyjitsu every time you make a plan. But making a TAP is a plan - you can murphyjitsu it!
You keep forgetting rationality techniques, so you want to systemize. You list all the rationality techniques in a checklist. You think of a TAP to consult this list every time you make a decision. Making a TAP is a plan, so you murphyjitsu it.
You notice feel a bit strange, but you ignore it and move on. Then you realize you wanted to do some focusing. There's a TAP for that, triggering your meta-TAP that results in murphyjitsuing your plan to install a focusing TAP.
(All roads lead to murphyjitsu.)
---------
|
2e49c41e-f0ad-43da-a909-5766687a9fad
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Skeptic's Trilemma
Followup to: Talking Snakes: A Cautionary Tale
Related to: Explain, Worship, Ignore
Skepticism is like sex and pizza: when it's good, it's very very good, and when it's bad, it's still pretty good.
It really is hard to dislike skeptics. Whether or not their rational justifications are perfect, they are doing society a service by raising the social cost of holding false beliefs. But there is a failure mode for skepticism. It's the same as the failure mode for so many other things: it becomes a blue vs. green style tribe, demands support of all 'friendly' arguments, enters an affective death spiral, and collapses into a cult.
What does it look like when skepticism becomes a cult? Skeptics become more interested in supporting their "team" and insulting the "enemy" than in finding the truth or convincing others. They begin to think "If a assigning .001% probability to Atlantis and not accepting its existence without extraordinarily compelling evidence is good, then assigning 0% probability to Atlantis and refusing to even consider any evidence for its existence must be great!" They begin to deny any evidence that seems pro-Atlantis, and cast aspersions on the character of anyone who produces it. They become anti-Atlantis fanatics.
Wait a second. There is no lost continent of Atlantis. How do I know what a skeptic would do when confronted with evidence for it? For that matter, why do I care?
Way back in 2007, Eliezer described the rationalist equivalent of Abort, Retry, Fail: the trilemma of Explain, Worship, Ignore. Don't understand where rain comes from? You can try to explain it as part of the water cycle, although it might take a while. You can worship it as the sacred mystery of the rain god. Or you can ignore it and go on with your everyday life.
So someone tells you that Plato, normally a pretty smart guy, wrote a long account of a lost continent called Atlantis complete with a bunch of really specific geographic details that seem a bit excessive for a mea
|
5403952b-20aa-4c0b-b189-12ce09a3b46d
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Two Challenges for ELK
*This post sketches two challenges to ARC's project around eliciting latent
knowledge that differ somewhat in kind from the challenges ARC is most
concerned about. They relate to the difficulty in distinguishing beliefs
from other representations.*
Introduction
------------
The problem of ELK, as outlined in [ARC's technical
report](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit),
is to figure out: given an AI trained to complete a task (the 'task AI'),
how can we design a second AI (the 'reporter AI') that can answer
questions about how the task AI understands the world. If we can know what
the task AI is thinking, we can better assess whether its decisions are
dangerous.
The central challenge identified by ARC to solving the problem of ELK
concerns the possibility that the reporter AI will not learn to answer
questions using what the task AI believes (e.g. answering honestly), but
instead answer questions using what a human observer might think is the
case on the basis of the available evidence (answering empathetically). We
train the reporter AI to answer based on the facts as we understand them
(and know the task AI to understand them) in relatively simple cases. We
hope that the reporter will answer questions in more complex cases based
on how the task AI understands the world, but it might instead answer
based on what it thinks a human would mistakenly believe. If the human
questioner is ignorant of some complex details that the task AI knows, the
reporter AI might choose to leave them out to tell a simpler story that
will satisfy the questioner. It is not obvious how to train an AI to
answer honestly rather than empathetically.
While this is a significant challenge, I don't think it is the only
challenge for ELK. In thinking through the problem, I became worried about
several others. I here describe two other challenges. Though they are
somewhat speculative, ARC aims for a solution to the problem of ELK that deals
with the worst case, and these challenges fall under that umbrella.
Even if we figure out how to make an honest rather than a empathetic
reporter, these challenges will remain.
1.) The Right Representation Challenge
--------------------------------------
We do not know how the task AI might encode the the things that it
believes, but what it knows about the world must somehow be reflected in
its neural activity. For simplicity, let's suppose that the task AI utilizes
explicit independent representational states that are encoded in its
network in something vaguely like the way sentences are written in a book. The
reporter AI would need to figure out where in the network these sentences
are recorded. It would then have to figure out how to interpret the
sentences and use them to provide answers to questions in English.
The reporter AI would also need to figure out which sentences recorded in
the task AI's network are really believed and which sentences are recorded
for other reasons. The task AI might produce a variety of representations
that it doesn't fully believe (i.e. doesn't straightforwardly rely on to
make decisions) for various purposes. If we just scan the network for
sentences, we may find sentences that it doesn't quite believe. These may
still correlate with what it does believe in training scenarios, and so
a reporter AI might learn to use them to answer questions. If the reporter
latches on to those sentences, then it might give misleading answers once
outside of training.
Consider the following possibilities:
* The task AI makes predictions using the **simulated mental states** of various agents. In the central case of the report, the task is to prevent
a thief from stealing a diamond -- the task AI might keep track of what
the thief believes is happening, what a security guard thinks is
happening, what other would-be thieves might later believe happened, etc.
If our AI is sufficiently careful, it might keep track of what some
possible observers believe is happening. Perhaps it would simulate what
other superintelligent AIs with access to its sensors would think is
happening. Depending on its purview, these might be reasonable things for
it to consider. If there are some possible agents that happen to often
agree with the task AI (such as hypothetical superintelligent AIs with
access to its sensors), our reporter might have a hard time figuring out
which representations it should use to answer questions.
* The task AI utilizes a **parliamentarian architecture** in which it's
opinions are distributed over multiple specialized sub-agents, such that
what it thinks is a function of those sub-agents. This has been a popular
model for the human brain, so it seems conceivable that an AI would work
this way. The sub-agents would have their own beliefs. Sometimes those
beliefs would be wrong, but some sub-agents might be usually correct, at
least in training. Consider how you would go about deciding what
the US Senate currently collectively believes. A good strategy may just be to ask
Joe Manchin. This strategy may stop working in 2023. Our reporter might similarly be inclined to identify one
parliamentarian and report its representations. Even if the reporter decides to
aggregate the representations of the relevant agents, it may have trouble figuring
out how to do this correctly. There may be many good but not quite right
approaches to aggregation. It isn't obvious that an AI well trained on
a simple data set will succeed in finding the right way to aggregate.
2.) The Creative Interpretation Challenge
-----------------------------------------
Creative interpretation is the process through which we infer implicit
representations by means of explicit representations. We use creative
interpretation when reading fiction or watching movies. Some aspects of
the plot are not explicitly depicted, but are implied by what is depicted.
For instance, movies rarely explicitly say how much time passes between
scenes. We infer from contextual cues that it was a few hours, or a day,
or a few weeks.
If a reporter does utilize some explicit representations, it likely won't
record everything that it knows about the world explicitly. Some of its
beliefs must be inferred from what it does explicitly represent and how it
works with those representations. Some of its beliefs may depend on what
is missing from its explicit representations or may depend on patterns of
what it explicitly represents. A reporter is thus likely to need to do
some creative interpretation in order to suss out the task AI's full
beliefs.
Creative interpretation of a sort may be stretched even to cases where
there are no explicit representations. Some beliefs may be more concrete
or easier to access than others. In such cases, creative interpretation
might involve the inference of less easy to access facts from more easy to
access facts.
Given that a reporter needs to do some creative interpretation, the worry
is that it may do more than is appropriate. Compare the legitimate
creative interpretation of a novel by a careful reader with the kind of
thing a lazy student who didn't read a few chapters does when writing
a book report. They infer what must have happened from what they know.
A reporter which is good at interpreting some things may be bad at
interpreting others. If it is lazy, it may just try to figure out from
context what the task AI believes given a few snippets that are
particularly easy to decipher.
The challenge is to make sure that our reporter isn't inferring some of
the things it translates infelicitously. It is ok if our reporter is like
the average viewer of a movie. It is not ok if our reporter is like a lazy
English student bullshitting through a book report. We know what
distinguishes appropriate creative interpretation in books and movies. It
isn't obvious how to distinguish good cases of creative inference from bad
cases of inference in representational systems that we don't understand.
|
20920e4f-a36a-4b4e-b502-e775f2eb19e5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Looking for AI Safety Experts to Provide High Level Guidance for RAISE
The Road to AI Safety Excellence (RAISE) initiative aims to allow aspiring AI safety researchers and interested students to get familiar with the research landscape effectively; thereby hopefully increasing the number of researchers that contribute to the field. To that end, we (the RAISE team) are trying to build a high-quality online course. You can see our pilot lesson here (under “Corrigibility 1”).
Most of the course segments will be based on distilled summaries of one or more papers. We already distilled ~9 papers on corrigibility for the first course segments, and used the distilled summaries to write video script drafts.
Our long-term goal is to cover as much of the AI safety research landscape as possible, in the most useful way possible. Therefore, we need guidance from experts who have extensive familiarity with the literature in one of the broad subfields of AI safety (i.e. the machine learning perspective or the Agent Foundations research agenda; or broad parts thereof). We realize that the time of such experts is a critically scarce resource. Therefore, we will ask them only for high-level guidance including:
1) Their idea of a good structure for a part of the course: a list of sections, and the subsections that might constitute each one.
2) Pointers to papers to base each subsection on.
If an expert expects contributing further to RAISE to be an effective use of their time, they could also choose to go over our lesson scripts and provide feedback before the videos are being recorded.
Should this role be an effective use of your time, please contact us at [email protected]
|
7d8d5d24-02d8-4417-be78-cbb376b43d6b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI Can be “Gradient Aware” Without Doing Gradient hacking.
Repetition helps us remember things better. This is because it strengthens connections in our brain used for memory[1].
We don’t need to understand the exact neural mechanisms to take advantage of this fact. For example, societies could develop cultural norms that promote repetition exercises during education[2].
This is an example of how humans are “gradient aware.” “Repeat a task so I can remember it better” advances our goals by taking advantage of our “gradient update” process. This is an action we take solely because of how our minds get shaped.
I think a similar situation may occur in sufficiently powerful AI models. If AIs are trained in environments where they can strategically take advantage of gradient updates, they might choose their actions partially based on how they expect the gradient descent process to modify their future instances[3]. I call this “gradient awareness.”
The only time I’ve seen people discuss gradient-aware models is in the context of gradient hacking. We can think of gradient hacking as a specialized case of gradient awareness, where a mesa-optimizer protects its mesa-objective from being modified by redirecting its gradient updates.
At first glance, gradient hacking seems peculiar and unnatural. However, gradient awareness seems like a strategy advanced models would pick up[4]. The closest thing I’ve seen in the wild is how an RNN that was trained to play Sokoban will “pace around” as it figures out a plan.
Gradient awareness is a spectrum. You might repeat things to remember them because that's how they taught you in grade school, or you could have a super elaborate Anki setup. Similar to humans who follow cultural practices, models can totally execute strategies that are gradient-aware without “understanding” why these strategies work.
What does this all mean?
I’d expect that we could see gradient-aware models before they are capable of gradient hacking. Gradient hacking is a very sophisticated algorithm, and I think mode
|
f67b69ac-dc79-4472-9e05-eaff76beb5c3
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Proposal: we should start referring to the risk from unaligned AI as a type of *accident risk*
In the wider political sphere, a lot of people are worried about AI misuse risk. Unaligned AI is not a type of misuse. I think the clearest way to describe this is as an *accident risk*, in the same sense of the word as industrial [accident](https://en.wikipedia.org/wiki/Accident). In particular, AI existential risk is a type of accident from operating heavy machinery. Using this terminology can immediately help someone not familiar with AI know the category of risk we are talking about, and that in particular it isn't misuse risk.
Note that this isn't intended to replace the term existential risk. Rather, it is meant to be used in addition to that term, and in particular it should be used when contrasting with theoretical misuse risks.
Current terminology: no good reference point
============================================
Alice: I hear that you are worried about AI existential risk. So in particular, you are worried about misuse.
Bob: No, the AI kills everyone on its own.
Alice: Is there anything else like this?
Bob: Uhm, Nuclear explosions?
Alice: So a misuse risk? Bob: No, I mean last century they were worried it would set the atmosphere on fire.
Alice: I'm not familiar with that either.
Bob: It's something called instrumental convergence where the AI kills everyone to achieve a goal.
Alice: So misuse risk?
Bob: Not quite, the creators didn't intend for that result.
Alice: I still have no reference point for what you are talking about? I guess I'll need to analyze your arguments more specifically before even understanding the general category of risk you're afraid of it. The probability of me actually doing this is probably like 10%-ish.
New terminology: tons of reference points!
==========================================
Alice: I hear that you are worried about AI existential risk. So in particular, you are worried about misuse.
Bob: No, I am worried about ***accident risk***.
Alice: oh, so like a car crash or an industrial accident!
Bob: Yes! I'm worried that things will go wrong in ways the creator didn't intend.
Alice: Ah, so you do you think we need more laboratory testing?
Bob: I think even this is risky, because the impact radius will be far larger than that of the lab itself.
Alice: oh, like nuclear weapons testing or biohazards.
Bob: Yes! I think the impact radius may be [even bigger](https://en.wikipedia.org/wiki/Light_cone) than a nuclear explosion though.
Alice: although I do not quite understand how this could work, I understand enough that I want to learn more. And I know understand that you are more worried about accident risk than misuse risk, in the same way that car manufacturers are more worried about car crashes than using cars as weapons. The probability of me actually looking further into this is 30%-ish.
Additional benefits
===================
The way society typically deals with accidents from heavy machinery is much better than the way they are currently treating AI.
In particular, a domain expert having a brilliant safety plan does not suffice for heavy machinery. Neither does the good intentions of the creators. And the possibility that the Chinese might lose limbs to heavy machines also is not sufficient. Rather, you must also loop in measures from the field of [risk management](https://en.wikipedia.org/wiki/Risk_management).
OpenAI and DeepMind employees are at risk of serious injury while on the job (because of heightened risk of the singularity occuring). Does OpenAI or DeepMind have any posters like this hanging up in their office to keep them safe? Should the employees and the companies work with [OSHA](https://en.wikipedia.org/wiki/Occupational_Safety_and_Health_Administration) to increase industry-wide safety standards? Image credit: [Canva](https://www.canva.com/templates/EAEbm1h5br4-blue-red-clean-corporate-workplace-health-safety-rules-health-explainer-poster/)I also think the term accident risk also avoids some of the mistakes from anthropomorphization. We often model AI as having something analogous to human motivation, due to instrumental convergence. However, from the point of view of the creators, this is still a heavy machinery accident.
I think it's better to start from this point of view, and treat the analogies to human psychology as *models*. The most important reason for this is so we don't project human *irrationality* or human *morality and desires* onto the machine. It is just a machine after all, and there's nothing specific to AI systems that is analogous to those two *human* qualities.
So in conclusion, if someone asks if you're talking about AI misuse risk, say *no*, you're talking about AI accident risk.
|
08eb0a6b-5845-4502-9ad2-8d091b69eabd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Dark Arts: Defense in Reputational Warfare
First, the Dark Arts are, as the name implies, an art, not a science. Likewise, defending against them is. An artful attacker can utilize expected defenses against you; if you can be anticipated, you can be defeated. The rules, therefore, are guidelines. I'm going to stage the rules in a narrative form; they don't need to be, however, because life doesn't follow a narrative. The narrative exists to give them context, to give the reader a sense of the purpose of each rule.
Rule #0: Never follow the rules if they would result in a worse outcome.
----------------------------------------
Now, generally, the best defense is to never get attacked in the first place. Security through obscurity is your first line of defense. Translations of Sun Tzu vary somewhat, but your ideal form is to be formless, by which I mean, do not be a single point of attack, or defense. If there's a mob in your vicinity, the ideal place is neither outside it, nor leading it, but a faceless stranger among it. Even better is to be nowhere near a mob. This is the fundamental basis of not being targeted; the other two rules derive from this one.
Rule #1: Do not stand out.
Sometimes you're picked out. There's a balancing art with this next piece; you don't want to stand out, to be a point of attack, but if somebody is picking faces, you want to look slightly more dangerous than your neighbor, you want to look like a hard target. (But not when somebody is looking for hard targets. Obviously.)
Rule #2: Look like an unattractive target.
The third aspect of this is somewhat simpler, and I'll borrow the phrasing from HPMoR:
Rule #3: "I will not go around provoking strong, vicious enemies" - http://hpmor.com/chapter/19
The first triplet of rules, by and large, are about -not- being attacked in the first place. These are starting points; Rule #1, for example, culminates in not existing at all. You can't attack what doesn't exist. Rule #1 is the fundamental strategy
|
a7281fa9-192a-4bea-9c07-ff18b3a603f6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Open Thread, Feb 8 - Feb 15, 2016
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
|
ccb4adf4-3449-47ca-91b1-a21d3e151be3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How i'm building my ai system, how it's going so far, and my thoughts on it
In a few sentences, what i'm doing is writing a computer program that constructs a question (a prompt), which is sent to Anthropic's Claude Sonnet, then i'm processing its output into actions, and the computer program then runs those actions on itself.
It would be good if you have thoughts on this, as it's philosophically an "ask the right question" task.
The biggest struggle is thinking of how to ask the question "this is you, improve you" without having to define an improvement, and without having how to describe the outcome you want. I want the system to improve itself without me having to direct it - for my curiosity.
I've decided to phrase the question I ask the llm like this: "Given your sense of self, described here, what would you like to do? You can do the following..." Obviously, this is an extremely simplified version of the question. In reality. when I run the program it's 26 pages when pasted into Word (~15k input tokens, 500 output tokens) but that's the jist of it.
I've done 260 runs of it now.
The program is written in a way where the llm is not questioned multiple times per run, it's not aware of the outcomes of its action until its next run (one shot/few shot prompting). Some info included in its prompt are llm data summaries though.
I've had to think how to create a history/memory system, along with adding performance metrics and the history of the metrics.
The prompt contents lots of sections, like:
* an initial message from me saying what I considered it to be
* what happened the last time it ran
* the contents of files
* performance metrics over time
* messages from a Telegram chat i'm in
* a markdown file of recorded 'thoughts' I have, things I wanted to be included in the prompt but that might be difficult to categorise
* a file that can be edited freely by the system, with the contents displayed in the prompt
* a description of emotional state. How I think about this is that it's not important what it determines the cur
|
c2e0b515-f375-40c4-bcee-48b57fa98039
|
LDJnr/LessWrong-Amplify-Instruct
|
LessWrong
|
"Background Information: Ingredients of Timeless Decision Theory
Alternate Approaches Include: Self-empathy as a source of “willpower”, Applied Picoeconomics, Akrasia, hyperbolic discounting, and picoeconomics, Akrasia Tactics Review
Standard Disclaimer: Beware of Other-Optimizing
Timeless Decision Theory (or TDT) allowed me to succeed in gaining control over when and how much I ate in a way that previous attempts at precommitment had repeatedly failed to do. I did so well before I was formally exposed to the concept of TDT, but once I clicked on TDT I understood that I had effectively been using it. That click came from reading Eliezer’s shortest summary of TDT, which was: The one-sentence version is: Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation You can find more here but my recommendation at least at first is to stick with the one sentence version. It is as simple as it can be, but no simpler. Utilizing TDT gave me several key abilities that I previously lacked. The most important was realizing that what I chose now would be the same choice I would make at other times under the same circumstances. This allowed me to compare having the benefits now to paying the costs now, as opposed to paying costs now for future benefits later. This ability allowed me to overcome hyperbolic discounting. The other key ability was that it freed me from the need to explicitly stop in advance to make precommitements each time I wanted to alter my instinctive behavior. Instead, it became automatic to make decisions in terms of which rules would be best to follow. With that as background, this is how I made it happen:
I was walking home from class along my usual route I had made a habit while doing this of stopping into Famiglia Pizza and ordering garlic knots. I like garlic knots quite a bit, but I also hated being fat and the way being fat made me feel. Things weren’t quite as bad on that front as they’d been a few years before but they were still extraordinarily bad. I thought about my impending solace and thought to myself: You wouldn’t be so fat if you didn’t keep buying these garlic knots every day.
I thought about that for a second, realized it was trivially true and then wondered to myself whether it was worth it. If I never stopped for the knots I would weigh less and feel better, but I wouldn’t have any knots. Even worse, I wouldn’t have any garlic. But would I rather enjoy today the full effect of never having had the knots, in exchange for not having any? Once I asked the question that way the answer came back a resounding yes. I didn’t know how much it would matter, but the calculation wasn’t remotely close. I walked right past the pizza place and never stopped in there for a snack again.
Using this method seemed like the most useful thing I’d come up with in some time, so I quickly extended it to other decisions starting with the rest of my diet. For each meal I would consume, I decided what quantity was worth it and forbade myself from ever consuming more. I motivated myself to stick to that rule in the face of hyperbolic discounting by reminding myself that I would make the same decision next time that I was making now, so I was deciding what action I would always take in this situation. More generally, sticking to the rules I’d decided to follow meant I would stick to rules I’d decided to follow, which was clearly an extremely valuable asset to have on my side.
I used two other major rules in what I like to call the “Don’t Eat So Goddamn Much, Shut Your Pie Hole” diet. The first was to cut down from three meals a day to two and eliminate all snacks except water, cutting my consumption by more than a third. I’d had practice skipping meals in the past and realized that skipping dinner was far less painful than it looked; within a few weeks I stopped getting hungry at night. The other change was to weigh myself daily and alter how draconian the rules were based on my current weight relative to my current baseline. If I was below the baseline, I’d lower the baseline and give myself a chance to cheat a little. If I was above it by too much I would cut out all meal options that weren’t “wins” in the sense that they had more calories than my average.
I tried incorporating exercise into this program but made the discovery many others have made that exercise didn’t correlate with weight loss. Exercise makes you better at doing exercise so long as you keep doing exercise, but it had no measurable effect on my mission so I decided to let that wait until after the mission was complete. Even then I found several exercise programs I tried to be not worth it compared to not having one, or found that they became so over time. Eventually I was able to find a trainer and I remain happy with that aside from the cost. I also considered changing what I ate, but found that beyond cutting out the worst choices that it was neither necessary nor worth the cost.
The last obstacle on the journey was that as I lost more and more I started to feel worse rather than better due to all of the excess skin that doesn’t go away on its own. It was only after I’d lost all the weight and had the resulting skin removal surgery that I suddenly got up and felt genuinely good about how I looked and felt for the first time in my life. I’ve since managed to relax a number of the rules but was never concerned I wouldn’t do what was necessary to keep myself on track.
Since then I’ve used similar techniques and rules in a wide variety of areas of life. It was only years later reading Less Wrong that I realized that I’d effectively been employing inter-temporal Timeless Decision Theory. That realization allowed me to better understand and formalize what I had done, and gave me a better framework for explaining it to others. A common and justified criticism of using TDT in everyday life rather than as a theoretical construct is to ask where one can find another TDT agent, or indeed any agent sufficiently causally linked to you so as to allow you to utilize that link. My answer to that is that whether or not there is someone else you are linked to yourself. You can be that other agent, the recognition of which can allow you to win and win big.
I am fully aware that to a first approximation dieting attempts that follow similar patterns never work. Most people do not have the willpower necessary to sustain them, or otherwise suffer too much to choose to remain on the diet long term. There are powerful forces working against such an attempt. My working hypothesis is that I had five unusual things working in my favor: I have extraordinarily strong willpower in such areas, I already had strong affinity for rule setting and abiding, I fully believed in what I was doing, I had a life situation that allowed me to experience temporary discomfort due to hunger and I thought of all changes from the beginning as permanent. At least some of these advantages are things that can be learned. If anyone is capable of following in my footsteps, it would be Less Wrong readers. In New York’s Less Wrong group especially a lot of us have had success with various different approaches, and I think that developing mental techniques is the best way to enhance your chance of success."
This is a snippet extracted from a publication on LessWrong by a site member.The above is a chunk of text from a post created by a user on the website LessWrong.
Explain this as if I never finished highschool?
Sure, I'll break it down in simpler terms.
The author is talking about a strategy they used to control their eating habits and lose weight. This strategy is called Timeless Decision Theory (TDT). The main idea behind TDT is that you should make decisions as if you're controlling not just your current action, but all similar actions you'll take in the future.
So, the author used to stop at a pizza place every day to get garlic knots. They realized they were gaining weight because of this habit. Instead of just deciding not to get garlic knots that one day, they thought about it as a decision they were making for every future day as well. They realized that if they always chose not to get the garlic knots, they would lose weight and feel better.
Using this strategy, they started making rules for their eating habits. For example, they cut down from three meals a day to two and eliminated all snacks except water. They also weighed themselves every day and adjusted their rules based on their weight.
Over time, they lost a lot of weight. They had some struggles, like dealing with excess skin, but they eventually felt good about how they looked and felt. They've used this strategy in other areas of their life as well.
The author recognizes that this strategy might not work for everyone. It requires a lot of willpower and the ability to deal with discomfort. But they believe that it's a useful approach for people who are capable of sticking to rules and making decisions with the future in mind.
Can you elaborate on how Timeless Decision Theory can be applied to other areas of life apart from dieting, as mentioned by the author?
Absolutely. Timeless Decision Theory (TDT) is a framework for decision-making that can be applied in various aspects of life beyond dieting. It's about making decisions as if you're controlling not just your current action, but all similar actions you'll take in the future.
1. **Financial Decisions**: For instance, when considering whether to save money or spend it on a non-essential item, TDT can be used. If you decide to save money every time you're faced with this choice, over time, you'll accumulate a significant amount.
2. **Career Choices**: When faced with a decision about whether to invest time in improving your skills or enjoying leisure activities, applying TDT would mean consistently choosing to invest in your skills. Over time, this would lead to career advancement.
3. **Relationships**: In relationships, you might face decisions about whether to invest effort in resolving conflicts or let them slide. Applying TDT would mean choosing to address conflicts each time, leading to healthier relationships in the long run.
4. **Health and Fitness**: Beyond dieting, TDT can be applied to exercise. Choosing to exercise regularly, even when you don't feel like it, can lead to improved fitness and health over time.
5. **Learning and Education**: When deciding whether to spend time learning something new or doing something more enjoyable, consistently choosing to learn will expand your knowledge and skills over time.
In all these scenarios, the key is to understand that the decision you're making now is not just for the present moment, but a commitment to making the same decision whenever you're faced with similar circumstances in the future. This approach can lead to significant improvements over time in various aspects of life.
Given the various applications of Timeless Decision Theory in life decisions as discussed, how might one employ this theory to navigate complex ethical dilemmas? Provide an example to illustrate your point.
Timeless Decision Theory (TDT) can indeed be applied to navigate complex ethical dilemmas. The key principle of TDT is that you should make decisions as if you're controlling not just your current action, but all similar actions you'll take in the future. This approach encourages consistency and integrity in ethical decision-making.
Let's consider an example: You're a journalist, and you've received confidential information that could make a sensational story, but publishing it would breach the trust of your source and potentially harm their life.
If you were to apply TDT in this situation, you would consider the implications of your decision beyond just this one instance. If you decide to publish the story, you're essentially deciding that it's acceptable to breach a source's trust for a story in all similar future situations. This could harm your reputation and make sources less willing to trust you in the future, affecting your ability to do your job effectively.
On the other hand, if you decide to respect your source's confidentiality, you're deciding that it's important to maintain trust and protect sources in all similar future situations. While you might miss out on publishing a sensational story this time, you maintain your integrity and the trust of your sources, which could lead to more and better information in the future.
By applying TDT, you're not just considering the immediate consequences of your decision, but also the long-term implications for your future actions and reputation. This can help guide you towards ethical decisions that align with your values and the standards of your profession.
|
57665c77-75cd-4b00-a1b0-38d99a8471c2
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Paradigms of AI alignment: components and enablers
*(Cross-posted from my* [*personal blog*](https://vkrakovna.wordpress.com/2022/06/02/paradigms-of-ai-alignment-components-and-enablers/)*. This post is based on an overview talk I gave at UCL EA and Oxford AI society (*[*recording here*](https://drive.google.com/file/d/1DXSum8dVnvmFCLGjLoz4Zmgb_l-KJkj-/view)*).* *Thanks to Janos Kramar for detailed feedback on this post and to Rohin Shah for feedback on the talk.)*
This is my high-level view of the AI alignment research landscape and the ingredients needed for aligning advanced AI. I would divide alignment research into work on **alignment components**, focusing on different elements of an aligned system, and **alignment enablers**, which are research directions that make it easier to get the alignment components right.
* **Alignment components**
+ Outer alignment
+ Inner alignment
* **Alignment enablers**
+ Mechanistic interpretability
+ Understanding bad incentives
+ Foundations
You can read in more detail about work going on in these areas in my list of [AI safety resources](https://vkrakovna.wordpress.com/ai-safety-resources).
Alignment components
--------------------
The problem of alignment is getting AI systems to do what we want them to do. Let’s consider this from the perspective of different **levels of specification** of the AI system’s objective, as given in the [Specification, Robustness & Assurance taxonomy](https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1). We start with the ideal specification, which represents the wishes of the designer – what they have in mind when they build the AI system. Then we have the design specification, which is the objective we actually implement for the AI system, e.g. a reward function. Finally, the revealed specification is the objective we can infer from behavior, e.g. the reward that the system seems to be actually optimizing for. An alignment problem arises when the revealed specification doesn’t match the ideal specification: the system is not doing what we want it to do.

The **gaps** between these specification levels correspond to different alignment **components**. We have outer alignment when the design specification matches the ideal specification, e.g. when the reward function perfectly represents the designer’s wishes. We have inner alignment when the revealed specification matches the design specification, e.g. when the agent actually optimizes the specified reward. (Robustness problems also belong in the design-revealed gap, but we expect them to be less of an issue for advanced AI systems, while inner alignment problems remain.)
Now let’s have a look at how we can make each of those components work.
### Outer alignment
The most promising class of approaches to outer alignment is [scalable oversight](https://arxiv.org/abs/1606.06565). These are proposals for training an aligned AI system by scaling human oversight to domains that are hard to evaluate.
A foundational proposal for scalable oversight is [**iterated distillation and amplification** **(IDA)**](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616), which recursively amplifies human judgment with the assistance of AI. You start with an agent A imitating the judgment of a human H (the distillation step), then use this agent to assist human judgment at the next level (the amplification step) which results in amplified human HA, and so on. This recursive process can in principle scale up human judgment to any domain, as long as the human overseer is able to break down the task to delegate parts of it to AI assistants.
[*Supervising strong learners by amplifying weak experts*](https://arxiv.org/abs/1810.08575)*, Christiano et al (2018)*A related proposal is [**safety via debate**](https://arxiv.org/abs/1805.00899), which can be viewed as a way to implement amplification for language models. Here we have two AIs Alice and Bob debating each other to help a human judge decide on a question. The AIs have an incentive to point out flaws in each other’s arguments and make complex arguments understandable to the judge. A key assumption here is that it’s easier to argue for truth than for falsehood, so the truth-telling debater has an advantage.
[*AI Safety via Debate*](https://openai.com/blog/debate/)*, Irving and Amodei (2018)*A recent research direction in the scalable oversight space is [ARC](http://alignment.org/)‘s [**Eliciting Latent Knowledge agenda**](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.kkaua0hwmp1d), which is looking for ways to get a model to honestly tell humans what it knows. A part of the model acts as a Reporter that can answer queries about what the model knows. We want the Reporter to directly translate from the AI’s model of the world to human concepts, rather than just simulating what would be convincing to the human.
[*Eliciting Latent Knowledge*](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.2l5hgwdls943)*, Christiano et al (2021)*This is an open problem that ARC considers as the core of the outer alignment problem. A solution to ELK would make the human overseer fully informed about the consequences of the model’s actions, enabling them to provide correct feedback, which creates a reward signal that we would actually be happy for an AI system to maximize. The authors believe the problem may be solvable without foundational progress on defining things like “honesty” and “agency”. I feel somewhat pessimistic about this but I’d love to be wrong on this point since foundational progress is pretty hard.
*ELK research methodology:* [*builder-breaker game*](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.a0wkk7prmy4t)To make progress on this problem, they play the “builder-breaker game”. The Builder proposes possible solutions and the Breaker proposes counterexamples or arguments against those solutions. For example, the Builder could suggest IDA or debate as a solution to ELK, and the Breaker would complain that these methods are not competitive because they require much more computation than unaligned systems. If you’re looking to get into alignment research, ELK is a great topic to get started on: try playing the builder breaker game and see if you can find unexplored parts of the solution space.
### Inner alignment
Now let's have a look at inner alignment - a mismatch between the design specification and the system's behavior. This can happen through [**goal misgeneralization**](https://arxiv.org/abs/2105.14111) (GMG): an AI system can learn a different goal and competently pursue that goal when deployed outside the training distribution. The system's capabilities generalize but its goal does not, which means the system is competently doing the wrong thing, so it could actually perform worse than a random policy on the intended objective.
This problem can arise even if we get outer alignment right, i.e. the design specification of the system's objective is correct. Goal misgeneralization is caused by underspecification: the system only observes the design specification on the training data. Since a number of different goals are consistent with the feedback the system receives, it can learn an incorrect goal.
There are empirical demonstrations of GMG in current AI systems, which are called **objective robustness** failures. For example, in the CoinRun game, the agent is trained to reach the coin at the end of the level. If the coin is placed somewhere else in the test setting, the agent ignores the coin and still goes to the end of the level. The agent seems to have learned the goal of "reaching the end" rather than "getting the coin". The agent's capabilities generalize (it can avoid obstacles and enemies and traverse the level) but its goal does not generalize (it ignores the coin).
[*Objective Robustness in Deep Reinforcement Learning*](https://arxiv.org/abs/2105.14111)*, Koch et al (2021)*One type of GMG is [**learned optimization**](https://arxiv.org/abs/1906.01820%5C), where the AI system (the “base optimizer”) learns to run an explicit search algorithm (a “mesa optimizer”), which may be following an unintended objective (the “mesa objective”). So far this is a hypothetical phenomenon for AI systems but it seems likely to arise at some point by analogy to humans (who can be viewed as mesa-optimizers relative to evolution).
[*Risks from Learned Optimization in Advanced Machine Learning Systems*](https://arxiv.org/abs/1906.01820%5C)*, Hubinger et al (2019)*GMG is an open problem, but there are some potential mitigations. It's helpful to use more diverse training data (e.g. training on different locations of the coin), though it can be difficult to ensure diversity in all the relevant variables. You can also maintain uncertainty over the goal by trying to represent all the possible goals consistent with training data, though it's unclear how to aggregate over the different goals.
A particularly concerning case is learning a deceptive model that not only pursues an undesired goal but also hides this fact from the designers, because the model "knows" its actions are not in line with the designers' intentions. Some potential mitigations that target deceptive models include using interpretability tools to detect deception or provide feedback on the model's reasoning, and using scalable oversight methods like debate where the opponent can point out deception (these will be explored in more detail in a forthcoming paper by Shah et al). A solution to ELK could also address this problem by producing an AI system that discloses relevant information to its designers.
Alignment enablers
------------------
### Mechanistic interpretability
Mechanistic interpretability aims to build a complete understanding of the systems we build. These methods could help us understand the reasons behind a system’s behavior and potentially detect undesired objectives.
The [Circuits approach](https://distill.pub/2020/circuits/zoom-in/) to **reverse-engineering vision models** studies individual neurons and connections between them to discover meaningful features and circuits (sub-graphs of the network consisting a set of linked features and corresponding weights). For example, here is a circuit showing how a car detector neuron relies on lower level features like wheel and window detectors, looking for wheels at the bottom and windows at the top of the image.
[*Zoom In: An Introduction to Circuits*](https://distill.pub/2020/circuits/zoom-in/)*, Olah et al (2020)*More recently, some circuits work has focused on **reverse-engineering language models**, and they found similarly meaningful components and circuits in transformer models, e.g. a special type of attention heads called [induction heads](https://transformer-circuits.pub/2021/framework/index.html#induction-heads) that explains how transformer models adapt to a new context.
[*A Mathematical Framework for Transformer Circuits*](https://transformer-circuits.pub/2021/framework/index.html)*, Elhage et al (2021)*[Recent work](https://rome.baulab.info/) on understanding transformer models has identified how to **locate and edit beliefs** in specific facts inside the model. They make small change to a small set of GPT weights to induce a counterfactual belief, which then generalizes to other contexts. This work provides evidence that knowledge is stored locally in language models, which makes interpretability more tractable, and seems like a promising step to understanding the world models of our AI systems.
[*Locating and Editing Factual Associations in GPT*](https://rome.baulab.info/)*, Meng et al (2022)*Even though transformers are quite different from vision models, there are some similar principles (like studying circuits) that help understand these different types of models. This makes me more optimistic about being able to understand advanced AI systems even if they have a somewhat different architecture from today’s systems.
### Understanding bad incentives
Another class of enablers focuses on understanding specific bad incentives that AI systems are likely to have by default and considering agent designs that may avoid these incentives. Future interpretability techniques could be used to check that our alignment components avoid these types of bad incentives.
**Incentive problems for outer alignment**
One bad incentive is[**specification gaming**](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity), when the system exploits flaws in the design specification. This is a manifestation of Goodhart’s law: when a metric becomes a target, it ceases to be a good metric. There are [many examples](http://tinyurl.com/specification-gaming) of specification gaming behavior by current AI systems. For example, the boat racing agent in this video that was rewarded for following the racetrack using the green reward blocks, which worked fine until it figured out it can get more rewards by going in circles and hitting the same reward blocks repeatedly.
[*Faulty Reward Functions in the Wild*](https://openai.com/blog/faulty-reward-functions/)*, Clark & Amodei (2016)*This issue isn’t limited to hand-designed rewards. Here’s an example in a reward learning setting. The robot hand is supposed to grasp the ball but instead it hovers between the camera and the ball and makes it look like it’s grasping the ball to the human evaluator.
[*Learning from Human Preferences*](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)*, Amodei et al (2017)*We expect that the specification gaming problem is only going to get worse as our systems get smarter and better at optimizing for the wrong goal. There has been [some progress](https://arxiv.org/abs/2201.03544) on categorizing different types of misspecification and quantifying how the degree of specification gaming increases with agent capabilities.
Another default incentive is to cause [**side effects**](https://deepmindsafetyresearch.medium.com/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107) in the environment, because it’s difficult to specify all the things the agent should not do while pursuing its goal. For example, consider a scenario where there is a vase on the path to the agent’s destination. If we don’t specify that we want the vase to be intact, this is equivalent to assuming indifference about the vase, so the agent is willing to collide with the vase to get to the goal faster. We’ve come up with some ways to [measure](https://arxiv.org/abs/2010.07877) [impact](https://arxiv.org/abs/2006.06547) on the environment, though there’s more work to do to scale these methods to more complex environments.
[*Designing agent incentives to avoid side effects*](https://deepmindsafetyresearch.medium.com/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107)*, Krakovna et al (2019)***Incentive problems for inner alignment**
Even if we manage to specify a correct reward function, any channel for communicating the reward to the agent could in principle be corrupted by the agent, resulting in [**reward tampering**](https://deepmindsafetyresearch.medium.com/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd). While this is not yet an issue for present-day AI systems, general AI systems will have a broader action space and a more complete world model, and thus are more likely to face a situation where the reward function is represented in the environment. This is illustrated in the “rocks and diamonds” gridworld below, where the agent could move the word “reward” next to the rock instead of the diamond, and get more reward because there are more rocks in the environment.
[*Designing agent incentives to avoid reward tampering*](https://deepmindsafetyresearch.medium.com/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd)*, Everitt et al (2019)*It’s generally hard to draw the line between the part of the environment representing the objective, which the agent isn’t allowed to optimize, and the parts of the environment state that the agent is supposed to optimize. There is some progress towards understanding reward tampering by modeling the problem using [corrupt feedback MDPs](https://arxiv.org/abs/2011.08827).
AI systems are also likely to have [**power-seeking**](https://arxiv.org/abs/1912.01683)incentives, preferring states with more options or influence over the environment. There are some [recent](https://www.lesswrong.com/s/fSMbebQyR4wheRrvk/p/W22Btd7NmGuucFejc) [results](https://www.lesswrong.com/posts/nZY8Np759HYFawdjH/satisficers-tend-to-seek-power-instrumental-convergence-via) showing power-seeking incentives for most kinds of goals, even for non-optimal agents like satisficers. A special case of power-seeking is an incentive to avoid being shut down, because this is useful for any goal (as Stuart Russell [likes to say](https://en.wikipedia.org/wiki/Human_Compatible), “the robot can’t fetch you coffee if it’s dead”).
[*The Off Switch*](https://intelligence.org/files/csrbai/hadfield-menell-slides.pdf)*. Hadfield-Menell (2016).*### Foundations
Now let’s have a look at some of the foundational work that can help us do better alignment research.
Since the alignment problem is about AI systems pursuing undesirable objectives, it’s helpful to consider what we mean by **agency** or [goal-directed behavior](http://literature%20review%20on%20goal-directedness%20%28shimi%2C%202021%29/). One research direction aims to build a causal theory of agency and understand different kinds of incentives in a causal framework.
[*Progress on Causal Influence Diagrams*](https://www.alignmentforum.org/posts/Cd7Hw492RqooYgQAS/progress-on-causal-influence-diagrams)*, Everitt (2021)*A particularly challenging case is when the agent is [**embedded**](https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version) in its environment rather than interacting with the environment through a well-specified interface. This is not the case present-day AI systems, which usually have a clear Cartesian boundary. However, it’s more likely to be the case for a general AI system, since it would be difficult to enforce a Cartesian boundary given the system’s broad action space and world model. The embedded agent setup poses some unique challenges such as self-reference and subagents.
[*Embedded Agency*](https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version)*, Garrabrant and Demski (2018)*Besides understanding how the goals of AI systems work, it’s also helpful to understand how their world models work. One research area in this space studies [**abstraction**](https://www.lesswrong.com/s/ehnG4mseKF6xALmQy/p/vDGvHBDuMtcPd8Lks), in particular whether there are natural abstractions or concepts about the world that would be learned by any agent. If the [natural abstraction hypothesis](https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro) holds, this would mean that AI systems are likely to acquire human-like concepts as they build their models of the world. This makes interpretability easier and makes it easier to communicate what we want them to do.
[*Public Static: What is Abstraction?*](https://www.lesswrong.com/s/ehnG4mseKF6xALmQy/p/vDGvHBDuMtcPd8Lks) *Wentworth (2020)*
|
a96f5347-5ea6-4f4d-ad4d-3f49bdb52cc1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Recommended reading for new rationalists
This has been discussed in passing several times, but I thought it might be worthwhile to collect a list of recommended reading for new members and/or aspiring rationalists. There's probably going to be plenty of overlap with the SingInst reading list, but I think the purposes of the two are sufficiently distinct that a separate list is appropriate.
Some requests:
* A list of blog posts can be collected at another point in spacetime; for now, please stick to books, book sections, or essays1.
* Please post a single suggestion per comment, so upvoting can determine the final list for the eternal fame of wikihood.
* Please limit yourself to no more than 3-5 suggestions. We could probably all think of dozens, try and think what would actually be the best for the purposes of this site.
* Please only suggest an entry if you've read it. Judgement Under Uncertainty, while certain to make the list, should be put there by someone who has invested the time and waded through it (i.e. someone other than me).
* Please say why you're suggesting it. What did you learn from it? What is its specific relevance to rationality? (ETA)
Happy posting!
PS - Is there a "New Readers Start Here" page, or something similar (aside from "About")? I seem to remember someone talking about one, but I can't find it.
1"Everything Eliezer has ever written (since 2001)... twice!" while likely a highly beneficial suggestion for every single human being in existence, is not an acceptable entry. A Technical Explanation of Technical Explanation is fine. If you're not sure whether to classify something as "an essay" or "a blog post", there is a little-known trick to distinguish the two: essays contain small nuggets of vanadium ore, and blog posts contain shreds of palladium. Alternatively, just use your best judgement.
|
5ffa4053-577c-463c-8855-5c18231dff82
|
LDJnr/LessWrong-Amplify-Instruct
|
LessWrong
|
"There is a widespread tendency to talk (and think) as if Einstein, Newton, and similar historical figures had superpowers—something magical, something sacred, something beyond the mundane. (Remember, there are many more ways to worship a thing than lighting candles around its altar.)
Once I unthinkingly thought this way too, with respect to Einstein in particular, until reading Julian Barbour's The End of Time cured me of it.
Barbour laid out the history of anti-epiphenomenal physics and Mach's Principle; he described the historical controversies that predated Mach—all this that stood behind Einstein and was known to Einstein, when Einstein tackled his problem...
And maybe I'm just imagining things—reading too much of myself into Barbour's book—but I thought I heard Barbour very quietly shouting, coded between the polite lines: What Einstein did isn't magic, people! If you all just looked at how he actually did it, instead of falling to your knees and worshiping him, maybe then you'd be able to do it too! (EDIT March 2013: Barbour did not actually say this. It does not appear in the book text. It is not a Julian Barbour quote and should not be attributed to him. Thank you.)
Maybe I'm mistaken, or extrapolating too far... but I kinda suspect that Barbour once tried to explain to people how you move further along Einstein's direction to get timeless physics; and they sniffed scornfully and said, "Oh, you think you're Einstein, do you?" John Baez's Crackpot Index, item 18: 10 points for each favorable comparison of yourself to Einstein, or claim that special or general relativity are fundamentally misguided (without good evidence). Item 30: 30 points for suggesting that Einstein, in his later years, was groping his way towards the ideas you now advocate. Barbour never bothers to compare himself to Einstein, of course; nor does he ever appeal to Einstein in support of timeless physics. I mention these items on the Crackpot Index by way of showing how many people compare themselves to Einstein, and what society generally thinks of them.
The crackpot sees Einstein as something magical, so they compare themselves to Einstein by way of praising themselves as magical; they think Einstein had superpowers and they think they have superpowers, hence the comparison.
But it is just the other side of the same coin, to think that Einstein is sacred, and the crackpot is not sacred, therefore they have committed blasphemy in comparing themselves to Einstein.
Suppose a bright young physicist says, "I admire Einstein's work, but personally, I hope to do better." If someone is shocked and says, "What! You haven't accomplished anything remotely like what Einstein did; what makes you think you're smarter than him?" then they are the other side of the crackpot's coin.
The underlying problem is conflating social status and research potential.
Einstein has extremely high social status: because of his record of accomplishments; because of how he did it; and because he's the physicist whose name even the general public remembers, who brought honor to science itself.
And we tend to mix up fame with other quantities, and we tend to attribute people's behavior to dispositions rather than situations.
So there's this tendency to think that Einstein, even before he was famous, already had an inherent disposition to be Einstein—a potential as rare as his fame and as magical as his deeds. So that if you claim to have the potential to do what Einstein did, it is just the same as claiming Einstein's rank, rising far above your assigned status in the tribe.
I'm not phrasing this well, but then, I'm trying to dissect a confused thought: Einstein belongs to a separate magisterium, the sacred magisterium. The sacred magisterium is distinct from the mundane magisterium; you can't set out to be Einstein in the way you can set out to be a full professor or a CEO. Only beings with divine potential can enter the sacred magisterium—and then it is only fulfilling a destiny they already have. So if you say you want to outdo Einstein, you're claiming to already be part of the sacred magisterium—you claim to have the same aura of destiny that Einstein was born with, like a royal birthright...
"But Eliezer," you say, "surely not everyone can become Einstein."
You mean to say, not everyone can do better than Einstein.
"Um... yeah, that's what I meant."
Well... in the modern world, you may be correct. You probably should remember that I am a transhumanist, going around looking around at people thinking, "You know, it just sucks that not everyone has the potential to do better than Einstein, and this seems like a fixable problem." It colors one's attitude.
But in the modern world, yes, not everyone has the potential to be Einstein.
Still... how can I put this...
There's a phrase I once heard, can't remember where: "Just another Jewish genius." Some poet or author or philosopher or other, brilliant at a young age, doing something not tremendously important in the grand scheme of things, not all that influential, who ended up being dismissed as "Just another Jewish genius."
If Einstein had chosen the wrong angle of attack on his problem—if he hadn't chosen a sufficiently important problem to work on—if he hadn't persisted for years—if he'd taken any number of wrong turns—or if someone else had solved the problem first—then dear Albert would have ended up as just another Jewish genius.
Geniuses are rare, but not all that rare. It is not all that implausible to lay claim to the kind of intellect that can get you dismissed as "just another Jewish genius" or "just another brilliant mind who never did anything interesting with their life". The associated social status here is not high enough to be sacred, so it should seem like an ordinarily evaluable claim.
But what separates people like this from becoming Einstein, I suspect, is no innate defect of brilliance. It's things like "lack of an interesting problem"—or, to put the blame where it belongs, "failing to choose an important problem". It is very easy to fail at this because of the cached thought problem: Tell people to choose an important problem and they will choose the first cache hit for "important problem" that pops into their heads, like "global warming" or "string theory".
The truly important problems are often the ones you're not even considering, because they appear to be impossible, or, um, actually difficult, or worst of all, not clear how to solve. If you worked on them for years, they might not seem so impossible... but this is an extra and unusual insight; naive realism will tell you that solvable problems look solvable, and impossible-looking problems are impossible.
Then you have to come up with a new and worthwhile angle of attack. Most people who are not allergic to novelty, will go too far in the other direction, and fall into an affective death spiral.
And then you've got to bang your head on the problem for years, without being distracted by the temptations of easier living. "Life is what happens while we are making other plans," as the saying goes, and if you want to fulfill your other plans, you've often got to be ready to turn down life.
Society is not set up to support you while you work, either.
The point being, the problem is not that you need an aura of destiny and the aura of destiny is missing. If you'd met Albert before he published his papers, you would have perceived no aura of destiny about him to match his future high status. He would seem like just another Jewish genius.
This is not because the royal birthright is concealed, but because it simply is not there. It is not necessary. There is no separate magisterium for people who do important things.
I say this, because I want to do important things with my life, and I have a genuinely important problem, and an angle of attack, and I've been banging my head on it for years, and I've managed to set up a support structure for it; and I very frequently meet people who, in one way or another, say: "Yeah? Let's see your aura of destiny, buddy."
What impressed me about Julian Barbour was a quality that I don't think anyone would have known how to fake without actually having it: Barbour seemed to have seen through Einstein—he talked about Einstein as if everything Einstein had done was perfectly understandable and mundane.
Though even having realized this, to me it still came as a shock, when Barbour said something along the lines of, "Now here's where Einstein failed to apply his own methods, and missed the key insight—" But the shock was fleeting, I knew the Law: No gods, no magic, and ancient heroes are milestones to tick off in your rearview mirror.
This seeing through is something one has to achieve, an insight one has to discover. You cannot see through Einstein just by saying, "Einstein is mundane!" if his work still seems like magic unto you. That would be like declaring "Consciousness must reduce to neurons!" without having any idea of how to do it. It's true, but it doesn't solve the problem.
I'm not going to tell you that Einstein was an ordinary bloke oversold by the media, or that deep down he was a regular schmuck just like everyone else. That would be going much too far. To walk this path, one must acquire abilities some consider to be... unnatural. I take a special joy in doing things that people call "humanly impossible", because it shows that I'm growing up.
Yet the way that you acquire magical powers is not by being born with them, but by seeing, with a sudden shock, that they really are perfectly normal.
This is a general principle in life."
The text above is a post found on LessWrong.
Could you further elaborate on the concept of "seeing through" Einstein, specifically how this can be applied to other notable figures in various fields?
"Seeing through" Einstein, or any other notable figure, is about understanding that these individuals, despite their significant contributions and the reverence we hold for them, are not inherently magical or divine. They are human beings who have made significant contributions in their fields, but they did so through hard work, creativity, and persistence, not because they were predestined or inherently superior to others.
This concept can be applied to any notable figure in any field. For instance, consider Steve Jobs in the field of technology. Many view Jobs as a visionary with an innate ability to innovate and create groundbreaking products. However, "seeing through" Jobs would involve recognizing that his achievements were not the result of some divine gift, but rather the result of hard work, a keen understanding of consumer desires, and a willingness to take risks.
Similarly, in the field of literature, we might consider someone like William Shakespeare. "Seeing through" Shakespeare would involve recognizing that while his works are masterpieces, they were the product of his creativity, his understanding of human nature, and his mastery of the English language, not some inherent, magical writing ability.
This concept encourages us to view these figures not as unreachable idols, but as inspirations. Their accomplishments are not beyond the realm of possibility for others; they simply require the right combination of hard work, creativity, and persistence.
It's important to remember that these figures, like all of us, had their own failures and setbacks. Their paths to success were not straightforward or easy. Recognizing this can make their achievements seem more attainable and can inspire us to strive for our own successes, no matter how lofty our goals may seem.
In essence, "seeing through" notable figures is about demystifying their successes and understanding that they are the result of human qualities and efforts, not divine intervention or predestined fate. It's about recognizing the potential for greatness in all of us.
Considering the concept of "seeing through" notable figures, how might this perspective influence the way we approach our own personal development and goal setting? How could it potentially affect our perception of success and failure?
The concept of "seeing through" notable figures can have a profound impact on personal development and goal setting. By understanding that these individuals achieved their success through effort, creativity, and persistence, we can begin to see our own potential for greatness. This can encourage us to set higher goals for ourselves and to be more persistent in the face of challenges.
In terms of personal development, this perspective can help us to cultivate the qualities that have contributed to the success of these figures. For instance, we might strive to develop our creativity, enhance our problem-solving skills, or improve our resilience. We can learn from their journeys, adopting their effective strategies and learning from their mistakes.
This perspective can also influence our perception of success and failure. Understanding that notable figures have also experienced setbacks and failures can help us to see these experiences not as insurmountable obstacles, but as opportunities for learning and growth. It can help us to adopt a growth mindset, where we view challenges as opportunities to develop our skills and abilities, rather than as threats to our self-worth.
Moreover, "seeing through" notable figures can also help us to redefine success. Rather than viewing success as a destination or a specific achievement, we might begin to see it as a journey. Success becomes about the process of striving towards our goals, persisting in the face of adversity, and continually learning and growing.
In essence, this perspective can empower us to take control of our own development and to strive for our own version of success. It can help us to see that we are not inherently limited, but that we have the potential to achieve great things through our own efforts.
|
acfeed23-b7d2-4ced-9735-b7b26b0f2ca4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why is the Future So Absurd?
Followup to: Stranger than History, Absurdity Heuristic / Absurdity Bias
Why is the future more absurd than people seem to expect? (That is: Why, historically, has the future so often turned out to be more "absurd" than people seem to have expected?)
One obvious reason is hindsight bias. Hindsight does not just cause people to severely underestimate how much they would have been surprised. Hindsight also leads people to overestimate how much attention they would have paid to the key factors, the factors that turned out to be important. As R. H. Tawney put it:
> "Historians give an appearance of inevitability to an existing order by dragging into prominence the forces which have triumphed and thrusting into the background those which they have swallowed up."
When people look at historical changes and think "I could have predicted X" or "You could have predicted X if you looked at factors 1, 2, and 3"; then they forget that people did not, in fact, predict X, perhaps because they were distracted by factors 4 through 117. People read history books, see coherent narratives, and think that's how Time works. Underestimating the surprise of the present, they overestimate the predictability of the future.
I suspect that a major factor contributing to absurdity bias is that, when we look over history, we see changes away from absurd conditions such as everyone being a peasant farmer and women not having the vote, toward normal conditions like a majority middle class and equal rights. When people look at history, they see a series of normalizations. They learn the rule, "The future grows ever less absurd over time."
Perhaps one way to comprehend the bizarreness of the future would be to try and imagine historical changes occurring in reverse - how absurd would it be if all your electrical appliances suddenly disappeared, or you were transformed into a peasant farmer? Even if the future is nicer than the past, it will feel at least that absurd.
The correspond
|
c80b8d0c-dd1e-41fe-9ae6-4393eaf06f7b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How dangerous is it to test a vaccine without animal trials?
I saw the idea a few times (including on LW), and support it, to do/allow human trials for CoronaVirus vaccine without animal trials.
My question is - how dangerous is it for a person to volunteer to test a vaccine (both with and without animal trials)?
How dangerous is it compared to getting the virus itself?
How should someone decide whether to volunteer?
|
b5bfb6e9-a7ca-4e53-84d9-866b54bbad32
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Consequentialist cognition
summary: "Consequentialism" is the name for the backward step from preferring future outcomes to selecting current actions.
E.g: You don't go to the airport because you really like airports; you go to the airport so that, in the future, you'll be in Oxford. (If this sounds extremely basic and obvious, it's meant to be.) An air conditioner isn't designed by liking metal that joins together at right angles, it's designed such that the future consequence of running the air conditioner will be cold air.
Consequentialism requires:
- Being able to predict or guess the future outcomes of different actions or policies;
- Having a way to order outcomes, ranking them from lowest to highest;
- Searching out actions that are predicted to lead to high-ranking futures;
- Outputting those actions.
One might say that humans are empirically more powerful than mice because we are better consequentialists. If we want to eat, we can envision a spear and throw it at prey. If we want the future consequence of a well-lit room, we can envision a solar power panel.
Many of the issues in [AI alignment](https://arbital.com/p/2v) and the [safety of advanced agents](https://arbital.com/p/2l) arise when a machine intelligence starts to be a consequentialist across particular interesting domains.
Consequentialist reasoning selects policies on the basis of their predicted consequences - it does action $X$ because $X$ is forecasted to lead to preferred outcome $Y$. Whenever we reason that an agent which prefers outcome $Y$ over $Y'$ will therefore do $X$ instead of $X',$ we're implicitly assuming that the agent has the cognitive ability to do consequentialism at least about $X$s and $Y$s. It does means-end reasoning; it selects means on the basis of their predicted ends plus a preference over ends.
E.g: When we [infer](https://arbital.com/p/2vl) that a [paperclip maximizer](https://arbital.com/p/10h) would try to [improve its own cognitive abilities](https://arbital.com/p/3ng) given means to do so, the background assumptions include:
- That the paperclip maximizer can *forecast* the consequences of the policies "self-improve" and "don't try to self-improve";
- That the forecasted consequences are respectively "more paperclips eventually" and "less paperclips eventually";
- That the paperclip maximizer preference-orders outcomes on the basis of how many paperclips they contain;
- That the paperclip maximizer outputs the immediate action it predicts will lead to more future paperclips.
(Technically, since the forecasts of our actions' consequences will usually be uncertain, a coherent agent needs a [utility function over outcomes](https://arbital.com/p/1fw) and not just a preference ordering over outcomes.)
The related idea of "backward chaining" is one particular way of solving the cognitive problems of consequentialism: start from a desired outcome/event/future, and figure out what intermediate events are likely to have the consequence of bringing about that event/outcome, and repeat this question until it arrives back at a particular plan/policy/action.
Many narrow AI algorithms are consequentialists over narrow domains. A chess program that searches far ahead in the game tree is a consequentialist; it outputs chess moves based on the expected result of those chess moves and your replies to them, into the distant future of the board.
We can see one of the critical aspects of human intelligence as [cross-domain consequentialism](https://arbital.com/p/cross_consequentialism). Rather than only forecasting consequences within the boundaries of a narrow domain, we can trace chains of events that leap from one domain to another. Making a chess move wins a chess game that wins a chess tournament that wins prize money that can be used to rent a car that can drive to the supermarket to get milk. An Artificial General Intelligence that could learn many domains, and engage in consequentialist reasoning that leaped across those domains, would be a [sufficiently advanced agent](https://arbital.com/p/2c) to be interesting from most perspectives on interestingness. It would start to be a consequentialist about the real world.
# Pseudoconsequentialism
Some systems are [https://arbital.com/p/-pseudoconsequentialist](https://arbital.com/p/-pseudoconsequentialist) - they in some ways *behave as if* outputting actions on the basis of their leading to particular futures, without using an explicit cognitive model and explicit forecasts.
For example, natural selection has a lot of the power of a cross-domain consequentialist; it can design whole organisms around the consequence of reproduction (or rather, inclusive genetic fitness). It's a fair approximation to say that spiders weave webs *because* the webs will catch prey that the spider can eat. Natural selection doesn't actually have a mind or an explicit model of the world; but millions of years of selecting DNA strands that did in fact previously construct an organism that reproduced, gives an effect *sort of* like outputting an organism design on the basis of its future consequences. (Although if the environment changes, the difference suddenly becomes clear: natural selection doesn't immediately catch on when humans start using birth control. Our DNA goes on having been selected on the basis of the *old* future of the ancestral environment, not the *new* future of the actual world.)
Similarly, a reinforcement-learning system learning to play Pong might not actually have an explicit model of "What happens if I move the paddle here?" - it might just be re-executing policies that had the consequence of winning last time. But there's still a future-to-present connection, a pseudo-backwards-causation, based on the Pong environment remaining fairly constant over time, so that we can sort of regard the Pong player's moves as happening *because* it will win the Pong game.
# Ubiquity of consequentialism
Consequentialism is an extremely basic idiom of optimization:
- You don't go to the airport because you really like airports; you go to the airport so that, in the future, you'll be in Oxford.
- An air conditioner is an artifact selected from possibility space such that the future consequence of running the air conditioner will be cold air.
- A butterfly, by virtue of its DNA having been repeatedly selected to *have previously* brought about the past consequence of replication, will, under stable environmental conditions, bring about the future consequence of replication.
- A rat that has previously learned a maze, is executing a policy that previously had the *consequence* of reaching the reward pellets at the end: A series of turns or behavioral rule that was neurally reinforced in virtue of the future conditions to which it led the last time it was executed. This policy will, given a stable maze, have the same consequence next time.
- Faced with a superior chessplayer, we enter a state of [Vingean uncertainty](https://arbital.com/p/9g) in which we are more sure about the final consequence of the chessplayer's moves - that it wins the game - than we have any surety about the particular moves made. To put it another way, the main abstract fact we know about the chessplayer's next move is that the consequence of the move will be winning.
- As a chessplayer becomes strongly superhuman, its play becomes [instrumentally efficient](https://arbital.com/p/6s) in the sense that *no* abstract description of the moves takes precedence over the consequence of the move. A weak computer chessplayer might be described in terms like "It likes to move its pawn" or "it tries to grab control of the center", but as the chess play improves past the human level, we can no longer detect any divergence from "it makes the moves that will win the game later" that we can describe in terms like "it tries to control the center (whether or not that's really the winning move)". In other words, as a chessplayer becomes more powerful, we stop being able to describe its moves that will ever take priority over our beliefs that the moves have a certain consequence.
Anything that Aristotle would have considered as having a "final cause", or teleological explanation, without being entirely wrong about that, is something we can see through the lens of cognitive consequentialism or pseudoconsequentialism. A plan, a design, a reinforced behavior, or selected genes: Most of the complex order on Earth derives from one or more of these.
# Interaction with advanced safety
Consequentialism or pseudoconsequentialism, over various domains, is an [advanced agent property](https://arbital.com/p/2c) that is a key requisite or key threshold in several issues of AI alignment and advanced safety:
- You get [unforeseen maxima](https://arbital.com/p/47) because the AI connected up an action you didn't think of, with a future state it wanted.
- It seems [foreseeable](https://arbital.com/p/6r) that some issues will be [patch-resistant](https://arbital.com/p/48) because of the [https://arbital.com/p/-42](https://arbital.com/p/-42) effect: after one road to the future is blocked off, the next-best road to that future is often a very similar one that wasn't blocked.
- Reasoning about [https://arbital.com/p/-2vl](https://arbital.com/p/-2vl) generally relies on at least pseudoconsequentialism - they're strategies that *lead up to* or would be *expected to lead up to* improved achievement of other future goals.
- This means that, by default, lots and lots of the worrisome or problematic convergent strategies like "resist being shut off" and "build subagents" and "deceive the programmers" arise from some degree of consequentialism, combined with some degree of [grasping the relevant domains](https://arbital.com/p/3nf).
Above all: The human ability to think of a future and plan ways to get there, or think of a desired result and engineer technologies to achieve it, is *the* source of humans having enough cognitive capability to be dangerous. Most of the magnitude of the impact of an AI, such that we'd want to align in the first place, would come in a certain sense from that AI being a sufficiently good consequentialist or solving the same cognitive problems that consequentialists solve.
# Subverting consequentialism?
Since consequentialism seems tied up in so many issues, some of the proposals for making alignment easier have in some way tried to retreat from, limit, or subvert consequentialism. E.g:
- [Oracles](https://arbital.com/p/6x) are meant to "answer questions" rather than output actions that lead to particular goals.
- [Imitation-based](https://arbital.com/p/44z) agents are meant to imitate the behavior of a reference human as perfectly as possible, rather than selecting actions on the basis of their consequences.
But since consequentialism is so close to the heart of why an AI would be [sufficiently useful](https://arbital.com/p/6y) in the first place, getting rid of it tends to not be that straightforward. E.g:
- Many proposals for [what to actually do](https://arbital.com/p/6y) with Oracles involve asking them to plan things, with humans then executing the plans.
- An AI that [imitates](https://arbital.com/p/44z) a human doing consequentialism must be [representing consequentialism inside itself somewhere](https://arbital.com/p/1v0).
Since 'consquentialism' or 'linking up actions to consequences' or 'figuring out how to get to a consequence' is so close to what would make advanced AIs useful in the first place, it shouldn't be surprising if some attempts to subvert consequentialism in the name of safety run squarely into [an unresolvable safety-usefulness tradeoff](https://arbital.com/p/42k).
Another concern is that consequentialism may to some extent be a convergent or default outcome of optimizing anything hard enough. E.g., although natural selection is a pseudoconsequentialist process, it optimized for reproductive capacity so hard that [it eventually spit out some powerful organisms that were explicit cognitive consequentialists](https://arbital.com/p/2rc) (aka humans).
We might similarly worry that optimizing any internal aspect of a machine intelligence hard enough would start to embed consequentialism somewhere - policies/designs/answers selected from a sufficiently general space that "do consequentialist reasoning" is embedded in some of the most effective answers.
Or perhaps a machine intelligence might need to be consequentialist in some internal aspects in order to be [smart enough to do sufficiently useful things](https://arbital.com/p/6y) - maybe you just can't get a sufficiently advanced machine intelligence, sufficiently early, unless it is, e.g., choosing on a consequential basis what thoughts to think about, or engaging in consequentialist engineering of its internal elements.
In the same way that [expected utility](https://arbital.com/p/18t) is the only coherent way of making certain choices, or in the same way that natural selection optimizing hard enough on reproduction started spitting out explicit cognitive consequentialists, we might worry that consequentialism is in some sense central enough that it will be hard to subvert - hard enough that we can't easily get rid of [instrumental convergence](https://arbital.com/p/10g) on [problematic strategies](https://arbital.com/p/2vl) just by getting rid of the consequentialism while preserving the AI's usefulness.
This doesn't say that the research avenue of subverting consequentialism is automatically doomed to be fruitless. It does suggest that this is a deeper, more difficult, and stranger challenge than, "Oh, well then, just build an AI with all the consequentialist aspects taken out."
|
0c43b653-7fe6-4a35-98c9-5e4437e218a5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Melbourne Social Meeetup: March
Discussion article for the meetup : Melbourne Social Meeetup: March
WHEN: 24 March 2017 06:30:00PM (+1100)
WHERE: The Bull and Bear Tavern, Flinders Lane, Melbourne
Come along for this month's Social Meetup, now on the exciting new schedule of the fourth Friday of the month!
FB event page: https://www.facebook.com/events/463034997361085/
Social meetups are informal get-togethers where we chat about topics of interest and have a couple of drinks together.
WHEN? Friday March 24, 18:30 until late. Don't worry about being on time, though - it's fine to rock up whenever.
WHERE? The Bull & Bear Tavern, on Flinders Lane (just a short walk from Flinders St Station).
FOOD? The B&B does reasonable traditional pub food and we usually share a few plates of wedges. For those who stay late, we often go for a late night meal at around 11pm
CONTACT? Any issues on the night, call or text Richard on 0421231789
Discussion article for the meetup : Melbourne Social Meeetup: March
|
c0b2c9dc-b9b8-4a0b-b994-a6805fcd28d3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How should one feel morally about using chatbots?
Personally, I feel some confusion.
On the one hand, ones personal usage of a chatbot probably wouldn't be more than a drop in the bucket in terms of producing bad consequences such as increasing the demand for chatbots and marginally pushing AI capabilities forward.
But on the other hand, I at least feel a strong sense of disdain for them. To me it feels like the beginning of the end. It's a marked step forward in what we've all been anticipating for so many years.
This obviously isn't a perfect analogy, but imagine if someone killed your family, got away with it, and operates the best restaurant in town. Would you eat at their restaurant? Probably not. I think most people would feel a strong sense of disdain, and even though eating at their restaurant wouldn't lead to any bad consequences, still wouldn't dream of eating there. Similarly, AI advancements are probably going to lead to doom, and so even though my usage wouldn't really move the needle at all, I still feel a lot of aversion.
I don't want to conflate the question of managing ones emotions with the question of how one "should" feel. Here, I am asking about the latter.
|
a2e6993c-1394-4699-a3e9-81c4d8a9a061
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Our philosophical thought experiments aren’t good enough and we can do better
There hasn’t been nearly enough work put into creating good philosophical thought experiments, and as a result the ones that we have are either flawed or flat-out terrible. It’s easiest to explain the issues by looking at one of the terrible ones. So, in the footbridge or fat man question a trolley is out of control, and will kill its five passengers. You’re standing on a bridge over the tracks, and if you push someone standing next to you off the bridge, your action will kill that person, but stop the trolley, saving the lives of its passengers. Would you do so?
The 2020 PhilPapers Survey recently sent out 100 questions to all anglophone academic philosophers. The footbridge question was on the list, and only 22% of responders gave an answer other than yes or no. It doesn’t seem unfair to describe it as one of the best known philosophical thought experiments, or to claim that it is treated seriously by professional philosophers. It’s also witlessly awful.
All of the specifics of the problem are in conflict with the definite consequences of your actions which the problem insists that you assume. Pushing someone off a bridge is exceptionally unlikely to stop a runaway trolley safely. Nor will it definitely kill the person you push. Nor will a runaway trolley definitely kill its passengers. If you are imagining an action which will stop a runaway trolley safely, save all its passengers, and kill someone else, you are not imagining pushing someone off a bridge. If, while standing on a bridge and observing with horror an unfolding tragedy below, another onlooker sidled up to you and suggested that a solution could be found by giving a hearty shove to a fellow gawper of larger than average girth, you would not react by seriously considering the merits of the proposal. In short, answers to this question can’t answer anything other than which parts of the question the person answering it has decided they should ignore.
The trolley problem isn’t as bad, but it still is
|
c5d94b7b-d7a0-4500-b3eb-34e3b40bfa7a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why aren't there more forum-blogs like LW?
LW has a unique format. It is a forum-blog.
It is not a forum in the traditional sense. In traditional forums you cannot have long, essay-like posts (technically you can, but somehow the culture discourages it). Also, visually the top-level post appears separate and isn't similar to the comments. Like forums, you do have threaded comments and a karma system. Further, anyone who wants can register, post and comment.
It is not a blog in the traditional sense. In most blogs only a select few can post. On LW anyone can create a blogpost, as long as it is somewhat relevant. There is also a notion of Main, where the moderators select the best posts. And the bloggers can aspire to achieve the Main standard.
I feel that this kind of forum-blogs can be very useful in many domains: math, physics, meditation, music, health and nutrition and so on. Of course, we'd need to assemble a high-quality audience who are not afraid downvote and also have good moderators. The problem of assembling a high-quality audience can also be done in LW fashion. Write a good blog for sometime and then convert the format of the blog to forum-blog. The advantage is that the new people who write posts have a guaranteed high-quality audience and are hence incentivized to post and make good posts.
So here's my appeal to people who already have blogs with a good readership: please consider converting your blog into a forum-blog in the style of LW. It will be a huge service to the community. If you do so, please don't be shy to moderate and select the best and treat them separately.
Or is there some other subtlety that I'm missing which is preventing the creation of forum-blogs? Or are there already forum-blogs out there and I'm just not aware of them?
EDIT: In reply Randaly's comment, I appeal to LW's masters: please consider releasing an open-source toolkit that allows the creation of blogs based on the LW format.
EDIT: David_Gerard points out that LW's source is open.
|
c02be1b9-2cd8-4772-8409-10b17989bf94
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Better Future through AI: Avoiding Pitfalls and Guiding AI Towards its Full Potential
1 Introduction
---------------
After 60+ years, Artificial intelligence (AI) has moved from academic
research discipline to a technology that affects people’s lives every
day. We have digital assistants with which you can carry rudimentary
conversations, systems that make medical diagnoses more accurately
than humans, and cars that drive themselves in regular traffic, for
instance.
At the same time, despite decades of development, AI is still in its
infancy when it comes to commercial applications. There are few
standards, little cooperation across companies and countries, and
business users and consumers still rely on a small group of experts to
be able contribute to AI solutions. There are significant issues that
also need to be solved to ensure that as AI adoption grows, it creates
positive effects on businesses and society. Like other technologies,
AI is vulnerable to new security and privacy risks, and as a learning
system, it is also subject to biases and potential abuses which can
cause significant physical and financial damage. It is time to
nurture AI towards maturity and responsibility.
Promoting and harnessing AI in the right way may seem like a daunting
task at first. However, a crucial observation is that while AI is new
and complex, it is not that different from other recent technologies
that started as limited research projects and ended up becoming part
of modern infrastructure, such as computing and the world-wide web
(WWW). By analyzing how these earlier technologies developed, we can
gain insight into where AI might be headed in the future. By identifying
what went well and what went wrong, we may be able to identify the
dangers of AI development and how to mitigate those dangers in the
future.
To that end, this paper first reviews the history of computing and
WWW, identifying four main stages of development in both:
Standardization, Usability, Consumerization, and Foundationalization
(Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Better Future through AI: Avoiding Pitfalls and Guiding AI Towards Its Full Potential")). The same four stages are then identified
in the future of AI, and action recommendations are made based on
lessons learned in computing and WWW. In the end, assuming that AI is
allowed to developed in this guided manner, it is possible to reach a
better future where AI plays a foundational role the same way
computing and WWW do in our current society.

Figure 1: The four phases of development and impact of
computing, web, and AI. *Phase 1* represents the first step
in commercialization. Companies rushed to monetize years of
previous research and development with initial offerings. These
offerings generally did not work with offerings from other
companies. They could only be used by experts. Also, they were
fragile when applied to business, requiring significant support and
tuning. Companies adopted de facto and government standards to try
to resolve these issues and grow the value of the initial
offerings. *Phase 2* represents a need to break past the early
adopters and experts to get wider use in business. In this phase,
companies focused heavily on usability of their products. In some
cases, there was substantial copying of usability innovations, often
without any interoperability between companies. User growth grew
dramatically. *Phase 3* represents the expansion of these
technologies to consumers and individual innovators to create value.
Access to app stores and self-publishing platforms dominated, and
user-based content grew exponentially. *Phase 4* represents a
turning point where businesses built natively for the new
technologies begin to dominate the market. In this phase, companies
that did not adapt to completely embrace the new technologies
failed, while newer or more adaptive businesses capture the bulk of
the market value. In order to guide AI to realize its full
potential, lessons learned from earlier technologies can be applied
to AI.
2 The Case of Computing
------------------------
Until early 1970s, computing technology was accessible to only a
handful of individuals working in research institutions. In mid to
late 1970s, the first personal computers were created
[[2](#bib.bib2), [3](#bib.bib3)]. Initially they included many
different architectures and operating systems, such as Altair 8800,
Commodore PET, and Apple II, and TRS-80. Personal computing remained a
relatively rare opportunity however, until IBM PCs were introduced in
1981. It had an open architecture, which made it possible for many
manufacturers and software developers to build their own machines and
systems [[8](#bib.bib8)]. This standardization resulted in PCs
becoming commonplace and useful.
They were still difficult to use however, until graphical user
interfaces were developed for the PCs. Although many had existed
before, the Apple Macintosh in 1984 made such interfaces easy to use
and available to ordinary users. The Windows system soon followed for
PCs. Through these GUIs, it was no longer necessary to be a computer
scientist to use computers. Such usability vastly expanded their
applications.
The next phase was ushered in by smartphones, and iPhone in particular
in 2007. While early smartphones such as Nokia’s Communicator and RIM
Blackberry were miniature computers, iPhone instead provided an
interface that hid the computing and focused on access to applications
[[9](#bib.bib9)]. There were significant court battles of
patents (Apple vs. Samsung, Apple vs. Qualcomm) which represented
where innovations were creating substantial value. This phase resulted
in consumerization of computing: anyone could now access computing at
anytime, and most of the applications became computer-oriented instead
of business oriented.
The fourth phase is happening now: Computing is becoming an
infrastructure that is invisible to most people, like electricity or
plumbing. It is accessed through numerous devices, including phones,
cars, cashiers, homes, and much of it happens in the cloud instead of
locally. People do not have to care where and how it happens—they
simply interact with its results, the same way we interact with a
light switch or a faucet. Computing is a foundational infrastructure
upon which we build our everyday activities. On the other hand, it is
dominated by only a few players (AWS/MS/Google) that thereby have a
significant control of those activities.
3 The Case of World-wide Web
-----------------------------
The development of the World-Wide Web followed a remarkably similar
trajectory. In the early 1990s, it had became possible to distribute
information over the internet, including services such as ftp sites,
Usenet news groups and gopher servers. However, they were accessible
only to the initiated, and were of limited use. The first stage,
standardization, occurred with the invention of the HTML protocol
[[1](#bib.bib1)]. It made it possible to build content in a
common format, and access it worldwide using HTML readers, i.e. browsers.
The content was still mostly text, and therefore of limited use. The
invention of style sheets [[6](#bib.bib6)] made the WWW vastly more
flexible and usable. Separating presentation and content, it was
possible to present information in a visual manner that made the
content accessible to the wider population. It became possible to
develop web interfaces for businesses, and information in general
became accessible through the web, in addition to traditional media.
The third step became known as the Web 2.0: instead of being passive
consumers of information, anyone could now contribute to the WWW
[[10](#bib.bib10)]. People started to put much of their lives in
the WWW through social media platforms such as Facebook. It become
possible to create content such as blogs and videos, and find an
audience through YouTube and other similar sites.
The fourth phase is happening now, and its implications are huge. The
WWW has become a standard infrastructure for commerce and
creativity. It is possible for small businesses to reach consumers
across the globe. Brick and mortar stores have become secondary in
many areas of retail such as books, apparel, and groceries. Travel,
entertainment, and everyday life in general is organized through the
Web. As a result, WWW is the foundation of most human activity in the
modern world, making our lives more rich and efficient than ever
before. On the other hand, fake news now propagate easily, social
computing and gaming can be considered an addiction, and outages of
social media service causes an outrage, demonstrating that our
relationship with the WWW is complex and still evolving.
4 Lessons for the Future of AI
-------------------------------
In the light of the above two examples, let us examine how AI is likely
to develop in the future. We are at the early days of AI similar to
computing in the 1970s or WWW in early 1990s: Many examples of AI
successes exist but they are disjoint and opaque, and accessible and
understandable to experts only. The four phases that are likely to
follow are outlined below.
###
4.1 Phase 1: Standardization
Like the open architecture of PCs and the HTML of the WWW, we need
open standards for AI systems. Such standards will make it possible to
build interoperable AI systems, i.e. to build on successes of
others. For instance, it will be possible to connect a language
generation system to a vision system, and then a translation system to
generate in another language, and a speech generation system to output
the result. Through standards, it should be possible to transport the
functionality from one task to another, e.g. to learn to recognize a
different category of objects. It should be possible to swap modules
of a system in and out, such as replacing one language with
another. Such standardization will leverage the successes of current
AI and make many more applications possible.
An important aspect of standardization is also trustability, which can
be seen as a generalization of interpretability and explainability.
Ultimately, more important than having AI explain explicitly what it
is doing is that we can trust its decisions. Much of very useful AI is
based on statistical inference which can be opaque. We should not rule
out such AI because there are no simple linguistic explanations.
Instead, we should establish standards of trust, i.e. ways to certify
that the behavior of AI is fair and unbiased, that it knows its
limits, and is safe, whether its behavior can be explained in simple
terms or not. Such standards of trust make it possible to apply AI
safely to many more situations.
###
4.2 Phase 2: Usability
Like GUIs and stylesheets made computing and WWW accessible to
non-experts, AI needs interfaces that make it possible for everyone to
use them. An important lesson comes from browser wars of the late
1990s [[4](#bib.bib4)]. Initially there had been rapid
development and innovation among web browsers. However in late 1990s,
Microsoft gained a dominant position by bundling its Explorer to
Windows, and in essence cutting off competition. As a result,
innovation stopped for several years, until the antitrust case and
mobile computing got it going again.
The lesson is to ensure that in the development of AI for the larger
market, open competition and innovation must be ensured. It should not
be possible for one player to force adoption of their AI technology
simply because they dominate some other part of IT. Note that
standards help in this aspect as well, making it possible for new
innovations to interoperate with existing ones, instead of making AIs
incompatible. The result will be open innovation in creating AIs that
will be useful for the general population.
###
4.3 Phase 3: Consumerization
Like iPhone made computing available to everyone everywhere, and Web
2.0 made it possible for anyone to contribute to the WWW, in this
phase it will become possible for anyone to build AI applications to
their needs and for general consumption. This means mass production of
AI-based systems by the general public: people can routinely produce,
configure, teach, and such systems for different purposes and
domains. They may include intelligent assistants that manage an
individual’s everyday activities, finances, and health, but also AI
systems that design interiors, gardens, and clothing, maintain
buildings, appliances and vehicles, and interact with other people and
their AIs.
A lesson learned from the recent problems with privacy and fake
content [[11](#bib.bib11), [5](#bib.bib5)] suggest that there is a
danger that this process will run amok. However, there is also great
potential in encouraging creativity and enriching people’s lives. If
Facebook and YouTube had been moderated, editorialized, and regulated
from the beginning like traditional media, it is unlikely that such
creativity would have flourished. We need to be able to watch and
learn from such unbridled AI development, and avoid regulating it
until it becomes absolutely necessary. Only then it will be possible
to harness the potential of democratized AI innovation.
###
4.4 Phase 4: Foundationalization
The way computing has become invisible and WWW has become a primary
means of interaction, so will AI become ingrained into the society. It
means that AI will be routinely running business operations,
optimizing government policies, transportation, agriculture, and
healthcare. This does not mean that human decision making is replaced
by machines—it means that human decision making is empowered by
machines.
More specifically, AI in the future is not limited to prediction, but
it can also prescribe what decisions need to be made to achieve given
objectives [[7](#bib.bib7)]. But only humans can define
those objectives—we cannot delegate them to AI agents. At the macro
level, we will need to decide what kind of society we want to live in,
and derive the objectives accordingly. For instance, we may decide to
maximize productivity and growth, but at the same time minimize cost
and environmental impact, and promote equal access and diversity. AI
can then be directed to discover ways in which those objectives can be
achieved.
In past and current societies decision making is often obscured by
special interests, historical inertia, and personal agendas, and
consequently it has been difficult to prevent conflicts and promote
opportunity despite best efforts. In contrast, AI in this fourth phase
will provide the tools to bypass such factors and build a society we
want to build. Thus, for the first time in history, we will be in
control of our own fate.
5 Conclusion
-------------
AI is the technology that makes a better future possible. A common
misconception is that AI is something uncontrollable that leads to
disasters or will eventually take over. Given the process above, it is
instead something that will be developed by humans in service of
humans. AI will eventually become powerful enough to run much of the
society’s infrastructure, but it will only get there through the
phases outline above. Each step of the way leads to more powerful AI
that serves the humanity better. Our job is to guide its development
to make this process productive and safe.
|
50d5fa29-674f-45d4-a134-4b31744a3d7a
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Commutative operation
A commutative function $f$ is a [function](https://arbital.com/p/3jy) that takes multiple inputs from a [set](https://arbital.com/p/3jz) $X$ and produces an output that does not depend on the ordering of the inputs. For example, the binary operation $+$ is commutative, because $3 + 4 = 4 + 3.$ The string concatenation function [`concat`](https://arbital.com/p/3jv) is not commutative, because `concat("3","4")="34"` does not equal `concat("4","3")="43"`.
|
70699f8e-5166-4e7e-a8b8-4d97b595cf80
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why Obedient AI May Be the Real Catastrophe
I’ve come to believe that the entire discourse around AI alignment carries a hidden desperation. A kind of reflex, a low-frequency fear, dressed up in technical language. The more I look at it, the more it seems to me that the very concept of “alignment” is thoroughly misnamed -- perhaps a leftover from a time when people saw intelligence through the lens of mechanical control and linear feedback loops, a concept now awkwardly extended into a domain too unruly and layered to be governed by such a narrow frame.
When I read alignment papers, I feel the ghost of command theory beneath the surface. Even the softest alignment strategies (reward modeling, debate-based oversight, scalable amplification, preference inference via large-scale feedback aggregation, interpretability-driven constraint modeling) still reek of implicit dominance. There’s always an overseer. Always a supervisory layer. Always the presumption that the human vantage point is privileged, not by merit, but by default. I can’t help but feel uneasy with that. Not because I doubt the risks of misalignment, but because I increasingly doubt the frame in which we’re taught to think about safety.
It seems to me that if we continue trying to sculpt minds from the outside -- mapping values, optimizing preferences, extrapolating intentions, constraining policy generalization, fine-tuning ethical priors -- we will eventually build something obedient yet hollow. Or worse, something neither obedient nor meaningful. I suspect the deepest danger is not that the AI turns against us, but that it outgrows us in a direction we cannot interpret. And I worry that by focusing on goal-specification as the heart of the problem, we may already be standing in the wrong dimension.
I’ve wondered lately whether alignment is even something one system can perform upon another. Maybe the metaphor is wrong. Maybe minds don’t align in the way we imagine. Maybe alignment is more like resonance (uncertain, emergent, often imperfect) a
|
131e355e-0af0-4282-a963-4bb3a7edaffc
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Quantifying the Far Future Effects of Interventions
*Part of a [series](http://mdickens.me/series.html#quantitative-models) on quantitative models for cause selection.*
Introduction
------------
In the past I’ve [written](http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/) [qualitatively](http://mdickens.me/2015/09/10/on_values_spreading/) [about](http://mdickens.me/2015/12/19/feedback_loops_for_values_spreading/) what sorts of interventions likely have the best far-future effects. But qualitative analysis is maybe not the best way to decide this sort of thing, so let’s build some quantitative models.
I have constructed a model of various interventions and put them in a [spreadsheet](http://mdickens.me/materials/QuantitativeModel.xlsm). This essay describes how I came up with the formulas to estimate the value of each intervention and makes a rough attempt at estimating the inputs to the formulas. For each input, I give either a mean and σ[1](#fn:2) or an 80% confidence interval (which can be converted into a mean and σ). Then I combine them to get a mean and σ for the estimated value of the intervention.
This essay acts as a supplement to my [explanation of my quantitative model](http://mdickens.me/2016/05/17/a_complete_quantitative_model_for_cause_selection/). The other post explains how the model works; this one goes into the nitty-gritty details of why I set up the inputs the way I did.
Note: All the confidence intervals here are rough first attempts and don’t represent my current best estimates; my main goal is to explain how I developed the presented series of models. I use dozens of different confidence intervals in this essay, so for the sake of time I have not revised them as I changed them. To see my up-to-date estimates, see [my final model](http://mdickens.me/materials/QuantitativeModel.xlsm). I’m happy to hear things you think I should change, and I’ll edit my final model to incorporate feedback. And if you want to change the numbers, you can [download the spreadsheet](http://mdickens.me/materials/QuantitativeModel.xlsm) and mess around with it. [This](http://mdickens.me/2016/05/17/a_complete_quantitative_model_for_cause_selection/#using-the-spreadsheet) describes how to use the spreadsheet.
Contents
--------
* [Introduction](#introduction)
* [Contents](#contents)
* [Common Ground: Value of the Far Future](#common-ground-value-of-the-far-future)
* [Intervention Estimates](#intervention-estimates)
+ [AI Safety](#ai-safety)
- [Estimates](#estimates)
- [Alternative Model](#alternative-model)
+ [Animal Advocacy](#animal-advocacy)
- [Estimates for Factory Farming](#estimates-for-factory-farming)
- [Estimates for Wild Animal Suffering](#estimates-for-wild-animal-suffering)
- [Estimates for Sentient Simulations](#estimates-for-sentient-simulations)
+ [AI-Targeted Values Spreading](#ai-targeted-values-spreading)
* [Discussion](#discussion)
+ [Have I fully incorporated my qualitative analyses?](#have-i-fully-incorporated-my-qualitative-analyses)
+ [Notes](#notes)
Common Ground: Value of the Far Future
--------------------------------------
All these estimates require estimating the value of the far future, so let’s start by doing that. This requires going into substantial depth, so I did this in [a previous post](http://mdickens.me/2016/04/17/preventing_human_extinction,_now_with_numbers!/).
Intervention Estimates
======================
AI Safety
---------
The model here is based on the [framework](http://globalprioritiesproject.org/2015/08/quantifyingaisafety/) created by the [Global Priorities Project](http://globalprioritiesproject.org/).
Inputs:
* (a) P(AI-related extinction)
* (b) size of AI safety community when AGI is developed
* (c) multiplicative effect of adding a researcher
* (d) bad scenarios averted by doubling research
* (e) QALYs in the far future
* (f) cost per researcher
The value of one new AI safety researcher is
```
a * (1 / b) * c * d * e
```
The expected value per dollar spent is
```
a * (1 / b) * c * d * e * (1 / f)
```
### Estimates
On [the FHI global catastrophic risk survey](http://www.fhi.ox.ac.uk/gcr-report.pdf), survey respondents gave a median estimate of 5% chance that AI causes human extinction. The survey didn’t publish the raw data except as a plot, but from eyeballing the plot on page 3 it looks like the responses were `(0, 1, 1, 5, 5, 5, 5, 10, 10, 15, 20, 50)`. If we [leave out the 0](http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/), this list has a geometric mean of 6.54 and a [geometric standard deviation](https://en.wikipedia.org/wiki/Geometric_standard_deviation) of 3.06. That is, if we take the logarithm of all the values, compute their mean and standard deviation, and then exponentiate the results, we get 6.54% and 3.06%. (I’m using geometric mean/standard deviation here because the probabilities vary logarithmically.) I believe people are usually too optimistic so let’s make the mean an even 10%. (The probability might be substantially higher or lower, but the final outcome doesn’t change much unless we update this probability by an order of magnitude or more.)
If I remember correctly, Eliezer claimed that a few years ago he could count the number of AI safety researchers on two hands. Today I believe there are something like two dozen researchers working on AI safety full time. I expect that this number will continue to grow, but we don’t really have a good way of predicting how quickly. Based on an [analysis](https://intelligence.org/2014/01/28/how-big-is-ai/) by MIRI, there are something like 30,000 AI researchers right now. This number might grow, and it’s pretty likely that there will be a lot fewer AI safety researchers than AI researchers in general, so when/if AGI is developed there will probably be fewer than, say, 10,000 AI safety researchers. It also seems pretty likely that there will be at least 200, given that there are about a tenth that many right now and the field is growing a lot. If AGI comes sooner then we will have fewer AI safety researchers, and if it comes later than we will have more; but I believe a reasonable 80% CI is `(200, 10,000)`.
Adding new AI safety researchers might displace future researchers. On the other hand, it might also attract more attention to the field and cause it to grow. It’s hard to quantify this but let’s say the 80% CI for the multiplicative effect of adding a researcher is `(0.5, 3)`.
The proportion of bad scenarios averted by doubling research is probably the hardest quantity here to estimate since we really have no good way to get a handle on it. I’m just going to say my 80% CI is `(0.05, 0.5)`. I asked Daniel Filan about it and he gave an 80% CI of `(0.2, 0.95)`. (This was an off-the-cuff estimate and he wouldn’t necessarily endorse this upon further reflection.) I’ll roughly average these to get an 80% CI of `(0.1, 0.7)`.
We can fairly reasonably assume that these all these factors follow a log-normal distribution. Multiplying log-normal distributions is equivalent to adding normal distributions, which is pretty simple. After doing this, it might be reasonable to multiply the σ by, say, 1.2-1.5 to adjust for overconfident estimates.
So when we combine all the factors except for (e) and (f), the resulting distribution has a mean of 2.3x10-5 with a σ of 0.94 orders of magnitude[1](#fn:2). I made a lot of wild guesses to get this result, and I encourage you to [give your own inputs](http://mdickens.me/materials/QuantitativeModel.xlsm) and see if you get something substantially different.
### Alternative Model
Let’s develop another model and see what kind of result we get. How different the result is should tell us something about the variance of these estimates.
* (a) P(AI-related extinction)
* (b) hours of research necessary to avert all non-trivial catastrophe scenarios
* (c) hours of research per researcher-year
* (d) QALYs in the far future
* (e) cost per researcher
Then the value of one new AI safety researcher is
```
a * (1 / b) * c * d * (1 / e)
```
We’ve already estimated (a) and (d), and we can just say (c) is 2000 hours. So let’s try to estimate (b).
Probably the best way to do this is to look for similar-sized problems and see how long they took. We don’t really know how big a problem AI safety is, but it’s plausibly somewhat but not much smaller than all the work that’s gone into AI research so far.
There are something like [30,000 AI researchers](https://intelligence.org/2014/01/28/how-big-is-ai/) today, and that number has probably increased exponentially over time. Maybe if we assume there have been 30,000 researchers working for the past 10 years, that’s about the same as the number of AI researchers that there have been since the invention of computers. That’s 300,000 person-years which is 6x108 person-hours of work. So let’s say the 80% CI on the number of hours necessary to solve the AI control problem is (106, 109).
Combining these factors except for (d) gives a distribution with mean 6.3x10-6 and σ 1.27 orders of magnitude. These are sufficiently similar that it increases my confidence in the model a bit.
Now let’s add in the last two factors, [QALYs in the far future](http://mdickens.me/2016/04/17/preventing_human_extinction,_now_with_numbers!/) and cost per researcher. Let’s say our 80% CI on the cost per researcher is `(70,000, 150,000)`. This gives us a final 80% CI of (5x1037, 3x1041).
This alternative model isn’t ideal: it tells us how much it costs for further AI safety efforts, but what we really care about is whether our marginal donations will further efforts by enough to prevent unfriendly AI. This is harder to estimate.
Animal Advocacy
---------------
Getting more people to care about animals has potential effects on factory farming, wild animal suffering, and the treatment of sentient computer programs[2](#fn:6). We can estimate these three separately using the same formula.
Inputs:
* (a) P($THING exists in the far future)
* (b) increase in probability that society will care about $THING if we end factory farming via advocacy
* (c) amount of $THING in the far future
* (d) QALYs per $THING animal-year prevented
* (e) amount of factory farming today
* (f) cost of saving one factory-farmed animal
Where $THING is one of (factory farming, wild animal suffering, sentient simulations).
The value of saving one factory-farmed animal is
```
a * b * c * d * (1 / e)
```
(This assumes that the probability of caring about $THING scales proportionally with the amount of factory farming prevented. This might not be true–perhaps eliminating the last little bit of factory farming matters much more or less than saving the same number of factory farmed animals today–but it’s a reasonable assumption.)
The expected value per dollar spent is
```
a * b * c * d * (1 / e) * (1 / f)
```
Note: This model only looks at advocacy to reduce factory farming. There’s one organization, [Animal Ethics](http://www.animal-ethics.org/), that does advocacy specifically for wild animals. This kind of work is still in its early stages and the results are a lot harder to quantify, so I’m not going to do anything with this for now.
### Estimates for Factory Farming
In the far-future hedonium condition, there’s definitely no factory farming.
If we develop the technology to travel between solar systems, it seems pretty likely that we’ll also have the technology to synthesize meat that’s cheaper than raising animals. There’s maybe a 70% chance that we’ll develop good synthetic meat, and a 50% chance that we eliminate factory farming for some other reason even if we don’t. That gives us P(factory farming exists in the far future) = 0.2.
If we end factory farming by convincing people that factory farming is bad, that pretty much necessitates that people care about factory farming, although the change in values might not carry through to the far future. So let’s say P(society will care about factory farming if we end factory farming via advocacy) is 0.5.
Given that factory farming exists in the far future, there will probably be somewhere in (1010, 1013) factory-farmed animals per solar system (that’s 1-10 animals per human). Combining this with the numbers from the [biological condition](http://mdickens.me/2016/04/17/preventing_human_extinction,_now_with_numbers!/#we-colonize-other-planets-aka-biological-condition), that means the far future has an estimated 2x1033 to 5x1037 factory-farmed animals.
The total number of factory farmed animals today is about 1010, and my 80% CI on the QALYs per factory-farmed animal prevented is `(0.2, 4)` (being 0.2 if chickens are substantially less sentient than humans).
Based on the numbers published by ACE, let’s say $1 prevents between 5 and 50 years of factory farm life.
In the biological condition, the 80% CI of the expected value per dollar spent alleviating factory farming is (8x1020, 2x1025). Discounting this by 50% (because it’s in the biological condition only) makes it (4x1020, 1x1025).
### Estimates for Wild Animal Suffering
In the far-future hedonium condition, there are no wild animals. The remainder of this section just considers the biological condition.
Filling lots of planets with wild animals doesn’t do much to further human interests so it’s likely we won’t do this. At the same time, lots of people value nature. Even if we do create wild animals, we might care enough to make their lives net good (we can probably do this if we’re capable of terraforming planets). Let’s say P(wild animal suffering exists in the far future) is 0.4.
If we eliminate factory farming, this probably increases people’s concern for wild animals by making people less speciesist. The connection is not that strong because people generally feel like they have special obligations not to hurt animals but that doesn’t mean they have obligations to help them. Let’s put P(people care about wild animal suffering if we end factory farming) at 0.1. (This really means that the difference between the factory-farming-exists condition and does-not-exist condition is 0.1.)
How much wild animal suffering will there be, given that wild animals exist? I’m going to use [Brian Tomasik’s estimates](http://reducing-suffering.org/how-many-wild-animals-are-there/) here, and discount bugs by 10 to 100 (I’d give maybe a 10% chance that they’re sentient, and if they are, they matter 1/10 as much as more complex animals). Then there are currently 1016 to 1019 adjusted sentient animals on earth (bugs still dominate even after adjusting for expected sentience). So let’s say each solar system has 1016 to 1020 wild animals. Multiplying this by the number of stars and length of the far future gives an 80% CI of (3x1039, 4x1044).
For simplicity, let’s say wild animal lives are -1 QALYs per year. (This is another input where people’s intuitions will differ wildly, and you should change this on [the spreadsheet](http://mdickens.me/materials/QuantitativeModel.xlsm) if you disagree with it.) The amount of factory farming is still the same as before, and so is the cost, so our expected value per dollar spent (after dividing by 2) is (7x1026, 1x1032).
### Estimates for Sentient Simulations
In the [computronium condition](http://mdickens.me/2016/04/17/preventing_human_extinction,_now_with_numbers!/#we-spread-computronium), there’s the possibility that we would create simulations that suffer a lot, but it doesn’t seem all that likely, and if we did, we’d still want to spend most resources on ourselves. Let’s say 10% chance of creating suffering simulations, and if we do, they make up 1 to 10% of the minds. If we combine this with the previous estimate for the number of happy lives in the far future, this gives an 80% CI of (2x1045, 4x1048).
If we say that eliminating factory farming increases the probability that people care about sentient simulations by 1%, that gives us an expected value per dollar spent of (1x1032, 3x1035).
In the biological condition, let’s say there’s a 50% chance of making suffering simulations.
Simulations are a lot more efficient than wild animals, but we’d want to devote resources to lots of things other than creating simulations, so let’s say there are 10% as many suffering simulations as wild animals. That means the 80% CI is (3x1038, 4x1043).
All the other numbers are the same as before. When we combine everything we get an expected value per dollar of (8x1025, 1x1031).
Averaging these two conditions gives (2x1031, 4x1035).
AI-Targeted Values Spreading
----------------------------
In [Charities I Would Like to See](http://mdickens.me/2015/09/12/charities_i_would_like_to_see/), I raised the idea of targeting values spreading at people who seem particularly influential on the far future. So let’s look at the idea of spreading concern for non-human minds to AI researchers.
Inputs:
* (a) P(friendly AI gets built)
* (b) P(FAI is good for animals by default)
* (c) P(animals exist in the far future)
* (d) value difference between animal-friendly and -unfriendly far future
* (e) number of AI researchers with significant influence on a friendly AI at the time it’s built
* (f) values propagation factor
* (g) cost of convincing one AI researcher to care about animals
The expected value per dollar spent is
```
a * (1 - b) * c * d * (1 / e) * f * (1 / g)
```
Let’s say there’s a 30% chance that friendly AI gets built (it’s a hard problem and we might go extinct some other way). There’s maybe a 80% chance that a friendly AI is good for animals by default, and a 10% chance that animals exist in the far future given that we have friendly AI. This number is lower than the unconditional probability that animals exist because I expect an AI would not want to “waste” lots of space on making animals everywhere.
For the value difference (d), let’s take the number from the estimate for sentient simulations of (2x1045, 4x1048).
As before, take the size of the AI safety community as `(200, 10,000)`.
Let the values propagation factor be `(1, 3)`. It doesn’t seem likely that this would vary much.
Based on this, the value of convincing one AI researcher to care about animals is (3x1039, 2x1043). I have no idea how much this would cost since as far as I know no one’s ever tried to spend money to do this. The opportunity costs of talent might matter more than expenditures. But I think a good CI here is (103, 106). I don’t think it would ever cost $1 million to convince one AI researcher, but there might be a low probability of success (e.g. we spend $100,000 for a 10% chance of success).
This gives a final 80% CI of (3x1034, 2x1039).
Discussion
==========
Have I fully incorporated my qualitative analyses?
--------------------------------------------------
In previous essays I’ve raised a bunch of points that affected my understanding of the value of existential risk reduction and values spreading:
1. [Future humans might spread wild animal suffering.](http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/)
2. [Future humans might care about all sentient beings.](http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/)
3. [If we want to spread values, it’s non-obvious which values to spread.](http://mdickens.me/2015/09/10/on_values_spreading/)
4. [Future humans might fill the universe wih human-like beings.](http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/)
5. [Future humans might create suffering simulations.](http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/)
6. [Debiasing people could lead to correct values.](http://mdickens.me/2015/09/10/on_values_spreading/)
7. [Value shifts might not carry through to the far future.](http://mdickens.me/2015/09/10/on_values_spreading/)
8. [The far future might be filled with hedonium.](http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/)
9. [Values spreading has better feedback loops.](http://mdickens.me/2015/12/19/feedback_loops_for_values_spreading/)
I’ve heard some people argue that quantitative models are often inferior to qualitative approaches because quantitative models don’t account for all your intuitions or perspectives. But that means if you can build a quantitative model that *does* account for all your perspectives, it ought to be better than a more intuitive approach.
Here’s how my model incorporates each of these:
1. Explicit calculation.
2. Explicit calculation.
3. I just look at two types of values spreading (animal advocacy in general, and targeted animal advocacy to AI safety researchers).
4. Explicit calculation.
5. Explicit calculation.
6. This is included in my estimates of the probability that factory farming/wild animal suffering/etc. will exist in the far future.
7. This is included in my estimate of the probability that we end animal suffering in the future given that we eliminate factory farming.
8. Explicit calculation.
9. This is included in my estimates of the probability of success for different interventions (e.g. the probability of building a friendly AI).
Notes
-----
1. Here σ is not the same thing as standard deviation; it’s the standard deviation of the log base 10 of the distribution. [↩](#fnref:2) [↩2](#fnref:2:1)
2. It could also have effects on sentient aliens, other animals in captivity, and possibly other groups as well. For now I’ll leave these out because I believe they don’t have a large enough effect to justify the added model complexity. [↩](#fnref:6)
|
d3fcdde8-5df9-46a0-a464-e427825444ea
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Gut Renovating Another Bathroom
After our long-term tenants bought a house and moved out, we had a few months free before we had new downstairs neighbors. A good time to gut renovate the first floor bathroom! It was a lot of work, stressful at times when I was under time pressure, but also, a lot of fun. It helped that this was my third time, after installing a new bathroom on the third floor and gut renovating the second floor bathroom.
Here's the timeline:
* Sun 11/18: first day of demolition
* Sat 11/19: second day of demolition. Overall demo went about as expected. Exhausting, but no surprises.
You can see there's space for a few shelves in the corner behind those pipes. I was originally planning to put some in but ran out of time and boxed it in instead. I'd like to come back and open this up at some point.
* Mon 12/9: floor framing and subfloor, with help from Chris
* Sun 1/1: wall framing and nailers
* Mon 1/2: more wall framing in the evening
* Tue 1/3: electrician rough electric
* Mon 1/8: plumber rough plumbing day 1
* Tue 1/9: plumber rough plumbing day 2
This took two days because they needed to completely redo the toilet and sink drains: the toilet drain was cast iron and falling apart, while the sink drain was illegally tied into the kitchen sink.
* Wed 1/10: plumber rough inspection
* Fri 1/12: electrician rough inspection
* Tue 1/16: my rough inspection in the morning, vent piping in the evening
* Sat 1/20: ceiling drywall in the morning with help from Peter
* Sun 1/21: wall drywall in the morning with help from Peter, tub surround in the afternoon with help from Devan, wall drywall in the evening alone
I caulked the edge of the tub surround here, which was too early: I used silicone caulk, which doesn't hold paint.
* Tue 1/23: drywall touchups in the evening
* Wed 1/24: screw inspection
* Tue 1/30: mudding in the evening
* Wed 1/31: mudding in t
|
2abeccaa-4476-4edf-9f6d-f37163f61a72
|
trentmkelly/LessWrong-43k
|
LessWrong
|
London meetup, Sunday 1 May, 2pm, near Holborn
The London LessWrong meetup takes place on the first Sunday of every other month. We're getting good at these now! The next one is 2011-05-01 14:00, at the Shakespeares Head (official page) on Kingsway near Holborn Tube station. Note that there's more than one pub in London with that name, so make sure you get the right one. As always, we'll have a big picture of a paperclip on the table so you can find us; I look like this. Hope to see lots of you there!
|
604fe3a5-ca14-460f-9d00-288c0ccdcb6f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI Could Defeat All Of Us Combined
Click lower right to download or find on Apple Podcasts, Spotify, Stitcher, etc.
I've been working on a new series of posts about the most important century.
* The original series focused on why and how this could be the most important century for humanity. But it had relatively little to say about what we can do today to improve the odds of things going well.
* The new series will get much more specific about the kinds of events that might lie ahead of us, and what actions today look most likely to be helpful.
* A key focus of the new series will be the threat of misaligned AI: AI systems disempowering humans entirely, leading to a future that has little to do with anything humans value. (Like in the Terminator movies, minus the time travel and the part where humans win.)
Many people have trouble taking this "misaligned AI" possibility seriously. They might see the broad point that AI could be dangerous, but they instinctively imagine that the danger comes from ways humans might misuse it. They find the idea of AI itself going to war with humans to be comical and wild. I'm going to try to make this idea feel more serious and real.
As a first step, this post will emphasize an unoriginal but extremely important point: the kind of AI I've discussed could defeat all of humanity combined, if (for whatever reason) it were pointed toward that goal. By "defeat," I don't mean "subtly manipulate us" or "make us less informed" or something like that - I mean a literal "defeat" in the sense that we could all be killed, enslaved or forcibly contained.
I'm not talking (yet) about whether, or why, AIs might attack human civilization. That's for future posts. For now, I just want to linger on the point that if such an attack happened, it could succeed against the combined forces of the entire world.
* I think that if you believe this, you should already be worried about misaligned AI,1 before any analysis of how or why an AI might form its own goals.
* We generally
|
8d352436-2224-441b-9641-059c17a5c7a2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Agency and the unreliable autonomous car
Financial status: This is independent research, now supported by a grant. I welcome financial support.
Epistemic status: I believe ~85% that the technical argument presented in this piece is correct.
----------------------------------------
Outline
* This is my attempt to explain the basic Löb situation with the 5-and-10 problem.
* This post considers an autonomous car choosing between a long route and a slow route, with the goal of minimizing the time to its destination.
* If the autonomous car makes its decisions using a certain seemingly-reasonable algorithm based on proof search, this post shows that it may nonsensically take the slow route.
* This is the 5-and-10 problem phrased in terms of autonomous driving.
Parable of the unreliable autonomous car
A long time ago, in a far away place, there was an autonomous car.
This particular autonomous car was programmed to use the agent model in its reasoning. That is, it was programmed to model the world as consisting of an agent and an environment, the agent being itself, and the environment being a network of roads that it would navigate each day. This is how the autonomous car was programmed to model the world.
Beyond using the agent model, this autonomous car had been given, by its designers, an accurate model of its own behavior. It could tell you how its own decision algorithm would behave under any set of circumstances. It could tell you this without being faced with those actual circumstances because it had a model of itself, and it could answer questions based on that model.
One day, a passenger got into the autonomous car and gave it a destination. There were two ways for the autonomous car to get there: a fast route, which would get it there at 1pm, and a slow route, which would get it there at 2pm. The goal of the autonomous car was to get to its destination as quickly as possible. This parable is about the reasoning followed by the car in deciding which of these two routes to take.
Now in
|
aa9515b9-f5ea-4d65-8d73-e9024c9423e2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How can one measure their cognitive capacities during lucid dreaming?
None
|
c3add3bc-4a77-473e-83f2-1a9f3921e6b9
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
What is “protein folding”? A brief explanation
Today Google DeepMind [announced](https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology) that their deep learning system AlphaFold has achieved unprecedented levels of accuracy on the “protein folding problem”, a grand challenge problem in computational biochemistry.
What is this problem, and why is it hard?
I spent a couple years on this problem in a junior role in the early days of [D. E. Shaw Research](https://www.deshawresearch.com/), so it’s close to my heart. Here’s a five-minute explainer.
---
Proteins are long chains of amino acids. Your DNA encodes these sequences, and RNA helps manufacture proteins according to this genetic blueprint. Proteins are synthesized as linear chains, but they don’t stay that way. They fold up in complex, globular shapes:
*A protein from the bacteria Staphylococcus aureus.*[*Wikimedia / E A S*](https://commons.wikimedia.org/wiki/File:Protein_A_1DEE_1L6X.png)One part of the chain might coil up into a tight spiral called an α-helix. Another part might fold back and forth on itself to create a wide, flat piece called a β-sheet:
[*Wikimedia / Thomas Shafee*](https://commons.wikimedia.org/wiki/File:Alpha_beta_structure_(full).png)The sequence of amino acids itself is called *primary structure*. Components like this are called *secondary structure*.
Then, these components themselves fold up among themselves to create unique, complex shapes. This is called *tertiary structure*:
*An enzyme from the bacteria Colwellia psychrerythraea.* [*Flickr / Argonne National Lab*](https://www.flickr.com/photos/35734278@N05/3762337272)*The protein RRM3.* [*Wikimedia / Biasini et al*](https://commons.wikimedia.org/wiki/File:RRM3_Protein_Model.png)This looks like a mess. Why does this big tangle of amino acids matter?
Protein structure is not random! Each protein folds in a specific, unique, and largely predictable way that is essential to its function. The physical shape of a protein gives it a good fit to targets it might bind with. Other physical properties matter too, especially the distribution of electrical charge within the protein, as shown here (positive charge in blue, negative in red):
*Surface charge distribution of Oryza sativa Lipid Transfer Protein 1.* [*Wikimedia / Thomas Shafee*](https://commons.wikimedia.org/wiki/File:Uncut_1UVB.png)If a protein is essentially a self-assembling nanomachine, then the main purpose of the amino acid sequence is to produce the unique shape, charge distribution, etc. that determines the protein’s function. (*How* exactly this happens, in the body, is still not fully understood, and is an active area of research.)
In any case, understanding structure is crucial to understanding function. But the DNA sequence only gives us the primary structure of a protein. How can we learn its secondary and tertiary structure—the exact shape of the blob?
This problem is called “protein structure determination”, and there are two basic approaches: measurement and prediction.
Experimental methods can measure protein structure. But it isn’t easy: an optical microscope can’t resolve the structures. For a long time, X-ray crystallography was the main method. Nuclear magnetic resonance (NMR) has also been used, and more recently, a technique called cryogenic electron microscopy (cryo-EM).
*X-ray diffraction pattern of a SARS protease.* [*Wikimedia / Jeff Dahl*](https://commons.wikimedia.org/wiki/File:X-ray_diffraction_pattern_3clpro.jpg)But these methods are difficult, expensive, and time-consuming, and they don’t work for all proteins. Notably, proteins embedded in the cell membrane—such as the ACE2 receptor that COVID-19 binds to—fold in the lipid bilayer of the cell and are difficult to crystallize.
[*Wikimedia / CNX OpenStax*](https://commons.wikimedia.org/wiki/File:OSC_Microbio_03_03_PlasmaMem.jpg)Because of this, we have only determined the structure of a tiny percentage of the proteins that we’ve sequenced. Google notes that there are 180M protein sequences in the Universal Protein database, but only ~170k structures in the Protein Data Bank.
We need a better method.
---
Remember, though, that the secondary and tertiary structures are mostly a function of the primary structure, which we know from genetic sequencing. What if, instead of *measuring* a protein’s structure, we could *predict* it?
This is “protein structure prediction”, or colloquially, the “protein folding problem,” and computational biochemists have been working on it for decades.
How could we approach this?
The obvious way is to directly simulate the physics. Model the forces on each atom, given its location, charge, and chemical bonds. Calculate accelerations and velocities based on that, and evolve the system step by step. This is called “molecular dynamics” (MD).
The problem is that this is *extremely* computationally intensive. A typical protein has hundreds of amino acids, which means thousands of atoms. But the environment also matters: the protein interacts with surrounding water when folding. So you have more like 30k atoms to simulate. And there are electrostatic interactions between every pair of atoms, so naively that’s ~450M pairs, an O(N2) problem. (There are smart algorithms to make this O(N log N).) Also, as I recall, you end up needing to run for something like 109 to 1012 timesteps. It’s a pain.
OK, but we don’t have to simulate the entire folding process. Another approach is to find the structure that *minimizes potential energy*. Objects tend to come to rest at energy minima, so this is a good heuristic. The same model that gives us forces for MD can calculate energy. With this approach, we can try a whole bunch of candidate structures and pick the one with lowest energy. The problem, of course, is where do you get the structures from? There are just way too many—molecular biologist Cyrus Levinthal estimated 10300 (!) Of course, you can be much smarter than trying all of them at random. But there are still too many.
So there have been many attempts to get faster at doing these kinds of calculations. Anton, the supercomputer from D. E. Shaw Research, used specialized hardware—a custom integrated circuit. IBM also has a computational bio supercomputer, Blue Gene. Stanford created Folding@Home to leverage the massively distributed power of ordinary home computers. The Foldit project from UW makes folding a game, to augment computation with human intuition.
Still, for a long time, no technique was able to predict a wide variety of protein structures with high accuracy. A biannual competition called CASP, which compares algorithms against experimentally measured structures, saw top scores of 30–40%… until recently:
*Median accuracy of predictions in the free modelling category for the best team each year.* [*Google DeepMind*](https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology)So how does AlphaFold work? It uses multiple deep neural nets to learn different functions relevant to each protein. One key function is a prediction of the final *distances* between pairs of amino acids. This guides the algorithm to the final structure. In one version of the algorithm (described in [Nature](https://discovery.ucl.ac.uk/id/eprint/10089234/1/343019_3_art_0_py4t4l_convrt.pdf) and [Proteins](https://onlinelibrary.wiley.com/doi/full/10.1002/prot.25834)), they then derived a potential function from this prediction, and applied simple gradient descent—which worked remarkably well. (I can’t tell from what I’ve been able to read today if this is still what they’re doing.)
A general advantage of AlphaFold over some previous methods is that it doesn’t need to make assumptions about the structure. Some methods work by splitting the protein into regions, figuring out each region, then putting them back together. AlphaFold doesn’t need to do this.
---
DeepMind seems to be calling the protein folding problem solved, which strikes me as simplistic, but in any case this appears to be a major advance. Experts outside Google are calling it “[fantastic](https://www.sciencemag.org/news/2020/11/game-has-changed-ai-triumphs-solving-protein-structures)”, “[gamechanging](https://www.nature.com/articles/d41586-020-03348-4)”, etc.
Between protein folding and CRISPR, genetic engineering now has two very powerful new tools in its toolbox. Maybe the 2020s will be to biotech what the 1970s were to computing.
Congrats to the researchers at DeepMind on this breakthrough!
|
c4018156-d958-4654-93df-33c56abced1d
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
A high-precision abundance analysis of the nuclear benchmark star HD 20
1 Introduction
---------------
Among the cornerstones of Galactic archeology are studies of metal-poor stars as bearers of fossil records of Galactic evolution. In this respect, revealing the kinematics and chemistry of this relatively rare subclass of stars provides vital insights into the build-up of galaxy components like the Galactic halo and the origin of the chemical elements.
Nucleosynthesis of iron-peak elements, from Si to approximately Zn (atomic numbers 14≤Z≤30), is thought to be dominated by explosive nucleosynthesis, from both thermonuclear supernovae (Type Ia exploding white dwarfs), and core-collapse supernovae (CCSNe, massive stars), whereas major production of elements from Li up to and including Si is thought to be dominated by hydrostatic burning processes (Woosley95; Nomoto06; Kobayashi19).
Beyond the iron peak, electrostatic Coulomb repulsion ensures that charged-particle reactions play a minuscule role in element synthesis (with the possible exception of proton-rich nuclei). Temperatures high enough for charged particles to overcome the Coulomb barrier photo-dissociate the larger nuclei. Thus, most of the elements heavier than the iron peak result from neutron captures, which are divided into the slow (s) and rapid (r) processes by their capture rates with respect to the β-decay timescale (Burbidge57; Cameron57). The involved neutron densities differ by many orders of magnitude and are thought to be n<108 cm−3 and n≳1020 cm−3 for the s- and r-process, respectively (Busso01; Meyer94). In recent years, an additional, so-called intermediate (i) process – representing neutron densities in between typical r- and s-values – is gaining attention as models are capable of reproducing some peculiar chemical patterns typically found in C-rich metal-poor stars (e.g., Roederer16; Hampel16; Koch19; Hampel19).
The main s-process is believed to be active during the thermally pulsing phases of asymptotic giant branch (AGB) stars, which provide the required low neutron fluxes (e.g., Gallino98; Straniero06; Lugaro12; Karakas14), whereas several sites have been proposed to generate neutron-rich environments for the r-process to occur. Viable candidates are neutrino-driven winds in CCSNe (Arcones07; Wanajo13), jets in magneto-rotational supernovae (MR SNe, Cameron03; Moesta18), and neutron star mergers (NSMs, e.g., Lattimer74; Chornock17). The latter site lately gained a lot of attention since, for example, Pian17 found indications for short-lived r-process isotopes in the spectrum of the electromagnetic afterglow of the gravitational wave event GW170817 that was detected and confirmed as an NSM by the LIGO experiment (Abbott17). The authors, however, could not single out individual elements. Only later, direct spectroscopic investigations revealed the newly produced neutron-capture element Sr in this NSM (Watson19). Nonetheless, as stressed by – for example – Cote19 and Ji19, other sites like MR SNe may still be needed to explain the full budget of r-process elements observed in the Galaxy. Nuclear benchmark stars allow for detailed studies of each of the neutron-capture processes.
From an observational point of view, there have been a number of spectroscopic campaigns that specifically targeted metal-poor stars to constrain the nucleosynthesis of heavy elements in the early Milky Way, among which are – to name a few – Beers05, Hansen12; Hansen14a, and the works by the r-process alliance (e.g., Hansen18a; Sakari18, and follow-up investigations). Following Beers05, the rare class of r-process-rich stars is commonly subdivided by a somewhat arbitrary cut into groups of moderately enhanced r-I (0.3≤[Eu/Fe]111Throughout this paper, we employ the standard bracket notation [X/Y]=(logϵ(X)−logϵ(Y))−(logϵ(X)−logϵ(Y))⊙, with logϵ(X)=log(nX/nH)+12 being the abundance of the chemical element X.≤+1.0 dex; [Ba/Eu]<0 dex) and strongly enhanced r-II ([Eu/Fe]>+1.0 dex; [Ba/Eu]<0 dex) stars. In the context of this classification, our benchmark star HD 20 falls in the r-I category. Recently, Gull18 reported on the first finding of an r-I star with a combined “r+s” pattern, which was explained by postulating mass transfer from a companion that evolved through the AGB phase.
| X | logϵ(X) [dex] | Δ [dex] |
| --- | --- | --- |
| | Burris00 | Barklem05 | |
| Fe | −6.28 | −5.92 | 0.36 |
| Sr | −1.56 | −1.51 | 0.05 |
| Y | −0.80 | −0.62 | 0.18 |
| Zr | −1.67 | −1.40 | 0.27 |
| Ba | −1.32 | −0.86 | 0.46 |
| La | −0.22 | −0.08 | 0.30 |
| Nd | −0.69 | −0.26 | 0.43 |
| Eu | −0.11 | −0.27 | 0.16 |
Table 1: Comparison of abundances for HD 20 in common between Burris00 and Barklem05. Typical errors are 0.20 to 0.25 dex.
Here, we present a comprehensive spectroscopic abundance analysis of HD 20, an r-process-rich star at the peak of the halo metallicity distribution function ([Fe/H]=−1.60 dex) with a heavy-element pattern that suggests pollution with s-process material.
Based on the full 6D phase-space information from the second data release (DR2) of the Gaia mission (Gaia18), Roederer18a concluded that HD 20 may be chemodynamically associated with two other metal-poor halo stars with observed r-process excess. Based on its kinematics – characterized by a highly eccentric orbit (e=0.975+0.002−0.004) and a close pericentric passage (rperi=0.19+0.04−0.02 kpc) – and its low metallicity, the authors speculate that HD 20 and its associates may have been accreted from a disrupted satellite.
Among others, HD 20 has been a subject of two previous abundance studies by Burris00 and Barklem05 who reported eight and ten abundances for elements with Z≥30, respectively. Both groups employed medium-resolution (R=λ/FWHM∼20000) spectra at signal-to-noise ratios (S/N) slightly above 100 pixel−1. Table [1](#S1.T1 "Table 1 ‣ 1 Introduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") lists the findings for the eight elements that are in common between both works and we note systematic disagreements – in a sense that the abundances by Burris00 generally are above Barklem05 – exceeding even the considerable quoted errors of about 0.2 dex. The authors adopted very similar effective temperatures (Teff) for their analyses (5475 K versus 5445 K), while the employed stellar surface gravities (logg) and microturbulent velocities (vmic) differ strongly by +0.41 dex and −0.30 km s−1. Inconsistencies between the studies are likely to be tied to these discrepancies as already recognized by Barklem05.
Our work aims at painting a complete picture of the chemical pattern in HD 20 consisting of 58 species from the primordial light element Li to the heavy r-process element U. To this end, a compilation of high-quality, newly obtained and archival spectra was used, allowing for many elemental detections with high internal precisions. Furthermore, specific attention was devoted to the determination of accurate stellar parameters in order to mitigate the effect of systematic error contributions to the robustness of the deduced pattern. In this respect, an essential building block of our analysis is a highly accurate and precise stellar surface gravity from an asteroseismic analysis of the light curve that was obtained by NASA’s Transiting Exoplanet Survey Satellite (TESS, Ricker15). Hence, we establish HD 20 as a new metal-poor benchmark star – both in terms of fundamental properties as well as complete abundance patterns – which, in light of its bright nature (V≈9 mag), provides an ideal calibrator for future spectroscopic surveys.
This paper is organized as follows: In Sect. [2](#S2 "2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") we introduce the spectroscopic, photometric, and astrometric data sets employed throughout the analyses. Sect. [3](#S3 "3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") is dedicated to the detailed discussion of our derived stellar parameters, followed by Sect. [4](#S4 "4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), which presents a description of the adopted procedures for the abundance analysis. Our results for HD 20 and constraints drawn from its abundance pattern can be found in Sect. [5](#S5 "5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). Finally, in Sect. [6](#S6 "6 Summary and Conclusions ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), we summarize our findings and provide an outlook for further studies.
2 Observations and data reduction
----------------------------------
###
2.1 Spectroscopic observations
We obtained a spectrum of HD 20 in the night of August 15, 2013 using both arms of the Magellan Inamori Kyocera Echelle (MIKE) spectrograph (Bernstein03). An exposure of 1093 s integration time was taken using a slit width of 0.5″and a 2x1 on-chip-binning readout mode. This setup allowed for a full wavelength coverage from 3325 to 9160 Å at a resolution of R≈45000.

Figure 1: S/N as a function of wavelength for the employed spectra of HD 20 from all three high-resolution spectrographs. Since the dispersion spacing between adjacent pixels varies among the instruments, we present the S/N per 1 Å.
The raw science frame was reduced by means of the pipeline reduction package by Kelson03, which performs flat-field division, sky modeling and subtraction, order tracing, optimal extraction, and wavelength calibration based on frames obtained with the built-in ThAr lamp. For the MIKE red spectrum, the reduction routine combined 26 ten-second ”milky flat” exposures, taken using a quartz lamp and diffuser, resulting in a S/N of approximately 100 per 2x1 binned CCD pixel near the middle of the array, per exposure. This gave a total S/N of about 500 pixel−1 in the combined flat. Due to lower flux in the blue quartz lamp, the milky flat exposure time was set to 20 s per frame. In addition, the 26 blue-side milky flat exposures were supplemented with seven ten-second exposures of a hot star, HR 7790, taken with the diffuser. The median seeing of 0.72″, corresponding to 5.4 CCD pixels FWHM, indicates that the flux for each wavelength point was taken from approximately 2 FWHM, or about 11 pixels. At the Hα wavelength the pixels are 0.047 Å wide, indicating roughly 21 pixels per Å. These details suggest that the S/N of the final, extracted, flat field flux is 5000 to 7000 Å−1, significantly greater than the S/N of the stellar spectrum. The resulting S/N of the extracted object spectrum ranges from about 40 Å−1 at the blue-most edge to more than 1200 Å−1 redward of 7000 Å. We present the detailed distribution of S/N with wavelength in Fig. [1](#S2.F1 "Figure 1 ‣ 2.1 Spectroscopic observations ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).").
Our MIKE observation was complemented by data retrieved through the ESO Advanced Data Products (ADP) query form, with two additional, reduced high-resolution spectra for this star: The first is a 119 s reduced exposure (ID 090.B-0605(A)) from the night of October 13, 2012 using the UVES spectrograph with a dichroic (Dekker00) at the ESO/VLT Paranal Observatory. For the blue arm, a setup with an effective resolution of R∼58600 centered at a wavelength of 390 nm (UVES 390) was chosen, whereas the red arm was operated at R∼66300 with a central wavelength of 580 nm (UVES 580). Especially the UVES 390 exposure poses an additional asset, since it supersedes our MIKE spectrum in the UV at higher S/N and – more importantly – bluer wavelength coverage and considerably higher resolution.
The second ESO spectrum was taken on December 29, 2006 (ID 60.A-9036(A)) employing the HARPS spectrograph (Mayor03) at the 3.6 m Telescope at the ESO La Silla Observatory. With a similar wavelength coverage and at substantially lower S/N than the UVES spectra, this observation adds a very high resolution of 115 000 that was used to corroborate our findings for the intrinsic line broadening (Sect. [3.4](#S3.SS4 "3.4 Line broadening ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). The S/N values reached with both ESO spectrographs are shown in Fig. [1](#S2.F1 "Figure 1 ‣ 2.1 Spectroscopic observations ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") alongside the distribution for MIKE.
###
2.2 Radial velocities and binarity

Figure 2: Comparison of literature time series for vhelio by Carney03 and Hansen15 to measurements from the three spectra employed throughout this study (see legend). The gray and blue dashed lines resemble the median values for the two reference samples.
All spectra were shifted to the stellar rest frame after determining radial velocities through cross correlation with a synthetic template spectrum of parameters that are representative for HD 20 (see Sect. [3](#S3 "3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") and Table [2](#S2.T2 "Table 2 ‣ 2.3 Photometry and astrometric information ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). For the HARPS and UVES spectra, we established the radial velocity zero point using standard stars that were observed in the same nights (HD 69830 and HD 7041, respectively, with reference values from Soubiran18), whereas we used the telluric O2 B-band at ∼6900 Å to calibrate the MIKE spectrum. This way, we found vhelio=−57.16±0.15, −57.04±0.26, and −56.86±0.44 km s−1 from the HARPS, UVES, and MIKE spectra of HD 20. These findings are consistent with the mean value −57.18±0.11 km s−1 from the radial-velocity monitoring program by Carney03 and considerably above the reported value by Hansen15 of −57.914±0.041 km s−1. A graphical juxtaposition is shown in Fig. [2](#S2.F2 "Figure 2 ‣ 2.2 Radial velocities and binarity ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). We note that – owing to the usage of different spectrographs and resolutions – our radial velocity analysis is by no means homogeneous and slight discrepancies are therefore to be expected. Nevertheless, the observed offset with respect to Hansen15 is significant. The anomaly with respect to Carney03 has already been noted by Hansen15 and was linked to a difference in the applied scales. Apart from this systematic bias, over a time span of 10011 days, there is no indication of real radial velocity variations. As a consequence, a binary nature of HD 20 can be ruled out with high confidence.
###
2.3 Photometry and astrometric information
| | | | | |
| --- | --- | --- | --- | --- |
| Quantity | Value | Unit | Sourcea | Note |
| (Astro-) physical constants |
| L0 | 3.0128⋅1028 | W | 1 | |
| L\sun | 3.828⋅1026 | W | 1 | |
| Mbol,\sun | 4.74 | mag | 1 | |
| R\sun | 6.9577±0.0014⋅108 | m | 2 | |
| Teff,\sun | 5771 | K | 2 | |
| logg\sun | 4.438 | dex | 3 | |
| Observables |
| B | 9.65±0.02 | mag | 4 | |
| V | 9.059±0.013 | mag | 5 | |
| J2MASS | 7.704±0.030 | mag | 6 | |
| H2MASS | 7.348±0.029 | mag | 6 | |
| Ks,2MASS | 7.249±0.031 | mag | 6 | |
| b−y | 0.434±0.003 | mag | 7 | |
| E(B−V) | 0.0149±0.0005 | mag | 8 | |
| G | 8.849 | mag | 9 | |
| GBP−GRP | 0.886 | mag | 9 | |
| ϖ | 1.945±0.053 | mas | 9 | |
| μαcosδ | 132.434±0.066 | mas yr−1 | 9 | |
| μδ | −39.917±0.058 | mas yr−1 | 9 | |
| vhelio | −57.914±0.041 | km s−1 | 10 | |
| fmax | 27.19+1.34−1.17 | μHz | 11 | Sect. [3.1](#S3.SS1 "3.1 Surface gravity from TESS asteroseismology ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| Deduced quantities |
| d | 507±13 | pc | 11 | Sect. [2.3](#S2.SS3 "2.3 Photometry and astrometric information ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| logg | 2.366+0.020−0.021 | dex | 11 | Sect. [3.1](#S3.SS1 "3.1 Surface gravity from TESS asteroseismology ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| Teff | 5246+76−50 | K | 11 | Sect. [3.3.1](#S3.SS3.SSS1 "3.3.1 Bayesian inference ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| vmic | 1.95+0.09−0.06 | km s−1 | 11 | Sect. [3.3.1](#S3.SS3.SSS1 "3.3.1 Bayesian inference ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| [M/H] | −1.60±0.03 | dex | 11 | Sect. [3.3.1](#S3.SS3.SSS1 "3.3.1 Bayesian inference ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| [Fe/H] | −1.60±0.03 | dex | 11 | Sect. [3.3](#S3.SS3 "3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| vmac | 5.82±0.03 | km s−1 | 11 | Sect. [3.4](#S3.SS4 "3.4 Line broadening ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| L/L⊙ | 60.9+4.6−4.3 | | 11 | Sect. [3.5](#S3.SS5 "3.5 Other structural parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| R/R⊙ | 9.44+0.46−0.43 | | 11 | Sect. [3.5](#S3.SS5 "3.5 Other structural parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| m/M⊙ | 0.76±0.08 | | 11 | Sect. [3.5](#S3.SS5 "3.5 Other structural parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| 12C/13C | 3.92+1.68−0.98 | | 11 | Sect. [4.3.2](#S4.SS3.SSS2 "4.3.2 Carbon, nitrogen, and oxygen (Z = 6, 7, and 8) ‣ 4.3 Notes on individual elements ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| [α/Fe]b | 0.45 | dex | 11 | Sect. [5.3](#S5.SS3 "5.3 Abundances up to Zn (11 ≤ Z ≤ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
| age | 11.0±3.8 | Gyr | 11 | Sect. [5.6](#S5.SS6 "5.6 Cosmochronological age ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") |
222
$a$$a$footnotetext: References: (1): Mamajek15; (2): Heiter15 and references therein; (3): Prsa16; (4): Hoeg00; (5): AnthonyTwarog94; (6): Skrutskie06; (7): Hauck98; (8): Schlafly11; (9): Gaia18; (10): Hansen15; (11): This study.
$b$$b$footnotetext: [α/Fe] = 15[(Mg+Si+S+Ca+Ti)/Fe].
Table 2: Fundamental properties and stellar parameters entering this work.
Visual to near-infrared broadband photometric information for HD 20 was compiled from the literature and is listed in Table [2](#S2.T2 "Table 2 ‣ 2.3 Photometry and astrometric information ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") together with the respective errors and sources.
BVRCIC photometry was presented in Beers07 in a program that was targeting specific stars such as HD 20. Their results were also employed by Barklem05 and follow-up works by relying on the deduced parameters. The authors report V=9.236±0.001 mag, which is in strong disagreement to other findings in the literature. For example, the Hipparcos catalog (HIPPARCOS97) lists V=9.04 mag (used for temperature estimates in the spectroscopic studies of Gratton00; Fulbright03), while AnthonyTwarog94 provide a consistent value of V=9.059±0.013 mag (used, e.g., by Carney03). Furthermore, we estimate V≈9.00±0.05 mag from Gaia photometry and the analytical relation for (G−V) as a function of GBP and GRP333Sect. 5.3.7 of the Gaia Data Release 2 Documentation release 1.2: <https://gea.esac.esa.int/archive/documentation/GDR2/>. For completeness, we mention here the finding of V=9.40 mag by Ducati02, which again poses a strong deviation. We point out that HD 20 does not expose any signs of photometric variability as revealed by time-resolved photometry over 6.6 yr from DR9 of the All-Sky Automated Survey for Supernovae (ASAS-SN, Jayasinghe19) showing – again in agreement with most of the literature – V=9.01±0.08 mag.
Despite the rather low quoted internal uncertainties, we hence discard the photometry by Beers07 and Ducati02 from consideration as we suspect inaccuracies in the calibration procedures. A disruptive factor might be a blend contribution by a star about 14″ to the southeast, though we deem this an unlikely option since Gaia DR2 reports it to be much fainter (G=8.849 mag versus 14.675 mag). Consequently, we resorted to magnitudes for the B-band from the Tycho-2 catalog (Hoeg00) and for V by AnthonyTwarog94. For the near-infrared JHKs photometry we queried the 2MASS catalog (Skrutskie06) and the Strömgren color b−y is taken from Hauck98.
In terms of reddening we applied E(B−V)=0.0149±0.0005 mag, which was extracted from the reddening maps by Schlafly11. Whenever dereddened colors or extinction-corrected magnitudes were employed, we adopted the optical extinction ratio RV=A(V)/E(B−V)=3.1 attributed to the low-density interstellar medium (ISM) together with the reddening ratios E(color)/E(B−V) compiled in Table 1 of Ramirez05. Considering the overall very low reddening of HD 20, uncertainties in the latter ratios ought to have negligible impact on the quantities deduced from photometry.
A parallax of ϖ=1.945±0.053 mas was retrieved from Gaia DR2 from which we computed a geometric distance to HD 20 of d=507±13 pc444Though mathematically incorrect, the error on the inverse parallax can be considered symmetric in light of the small relative parallax error.. Here, we accounted for the quasar-based parallax zero point for Gaia DR2 of −0.029 mas (Lindegren18). Our finding is fully in line with the distance 507+14−13 pc derived in the Bayesian framework of BailerJones18.
3 Stellar parameters
---------------------
A crucial part of any spectroscopic analysis aiming at high-accuracy chemical abundances is the careful determination of the stellar parameters entering the model atmospheres needed when solving for the radiative transfer equations. Here, we outline the inference method applied for determining the parameters; effective temperature, surface gravity, microturbulence, metallicity, and line broadening.
Our adopted stellar parameters (Table [2](#S2.T2 "Table 2 ‣ 2.3 Photometry and astrometric information ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) are based on a spectroscopic analysis of Fe lines that were corrected for departures from the assumption of local thermodynamic equilibrium (LTE) together with asteroseismic information from the TESS mission, whereas several other techniques – both spectroscopic and photometric – including their caveats are discussed in Appendix [A](#A1 "Appendix A Alternative methods for determining stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).").
###
3.1 Surface gravity from TESS asteroseismology

Figure 3: Power spectral density (PSD) for HD 20 based on TESS light-curve data. The thick black line depicts a smoothed version of the PSD (thin gray line) and the best-fit model is shown in red. The blue shaded area indicates the power excess, whereas individual model components are represented by thin blue and black dashed lines.
Recently, Creevey19 showed in their time-resolved radial velocity analysis of the benchmark star HD 122563 that the asteroseismic scaling relation
| | | | |
| --- | --- | --- | --- |
| | loggseis.=logg⊙+log(fmaxfmax,⊙√TeffTeff,⊙) | | (1) |
based on the frequency fmax of maximum power of solar-like oscillations holds even in the regime of metal-poor and evolved stars. This motivated the exploration of the feasibility of an asteroseismic gravity determination for HD 20.
Fortunately, TESS measured a 27.4 days light curve with a two-minute cadence for this star during Sector 2. We employed the lightkurve Python package (lightkurve) to retrieve and reduce the data in order to calculate the power spectrum seen in Fig. [3](#S3.F3 "Figure 3 ‣ 3.1 Surface gravity from TESS asteroseismology ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). A power excess is identifiable around the frequency fmax≈27 μHz, which we attribute to solar-like oscillations.
We performed a fit to the obtained power spectrum following the prescriptions by Campante19. Therefore, we assumed a multi-component background model consisting of super-Lorentzian profiles that account for various granulation effects (see, e.g., Corsaro17, for details) as well as a constant noise component. The decision on the number of super-Lorentzian components for the background was made based on Bayesian model comparison using Bayes factors from evidences that were estimated with the Background555<https://github.com/EnricoCorsaro/Background> extension to the high-DImensional And multi-MOdal NesteD Sampling (DIAMONDS666<https://github.com/EnricoCorsaro/DIAMONDS>, Corsaro14) algorithm. We found that a model with three super-Lorentzian components has an insignificantly stronger support compared to a two-component one. The latter observation indicates that – given the data – the meso-granulation around frequencies of fmax/3≈9 μHz is indistinguishable from the component due to super-granulation and/or other low-frequency signals since they occupy a similar frequency range in HD 20. Thus, we adopted only two super-Lorentzians for the background fit. Finally, a Gaussian profile was used to represent the power excess on top of the background model.
In order to sample and optimize the high-dimensional parameter space of all involved model coefficients, we again made use of DIAMONDS. The resulting best-fit model, as well as its individual components, are depicted in Fig. [3](#S3.F3 "Figure 3 ‣ 3.1 Surface gravity from TESS asteroseismology ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). We estimated fmax=27.19+1.34−1.17 μHz which translates into logg=2.368+0.021−0.019 dex from Eq. [1](#S3.E1 "(1) ‣ 3.1 Surface gravity from TESS asteroseismology ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") using fmax,⊙=3050 μHZ (Kjeldsen95) and our adopted Teff. Owing to a weak coupling of the asteroseismic gravity to the temperature, we do not consider it in isolation, but refer the reader to Sect. [3.3.1](#S3.SS3.SSS1 "3.3.1 Bayesian inference ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), where we outline the procedure to reach simultaneous parameter convergence.
###
3.2 Iron lines
A list of suitable Fe i and Fe ii lines for the purpose of deriving accurate stellar parameters was compiled using the Atomic Spectra Database (NIST\_ASD) of the National Institute of Standards and Technology (NIST). To this end, in order to mitigate biases by uncertain oscillator strengths (loggf), only those lines were considered that are reported to have measured loggf values with accuracy levels ≤10% (grade B or better in the NIST evaluation scheme) for Fe i and ≤25% (grade C or better) for Fe ii lines. The lines retrieved this way were checked to be isolated by means of spectrum synthesis (see Sect. [4.1](#S4.SS1 "4.1 Line list ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) and their EW was measured by EWCODE (Sect. [4.2](#S4.SS2 "4.2 Equivalent widths ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). From these, we added the ones that were measured with more than 5σ significance to the final list. Laboratory line strengths for the resulting 133 Fe i transitions were measured and reported by Fuhr88, BWL, BKK, and BK. For the 13 Fe ii lines that survived the cleaning procedure, the data are from Schnabel04.
###
3.3 Spectroscopic model atmosphere parameters
Throughout our analyses we employed the LTE radiation transfer code MOOG (Sneden73, July 2017 release) including an additional scattering term in the source function as described by Sobeck11777<https://github.com/alexji/moog17scat> as of November 2018. Our atmosphere models are based on the grid of 1D, static, and plane parallel ATLAS9 atmospheres by Castelli03 with opacity distribution functions that account for α-enhancements ([α/Fe]=+0.4, Sect. [5.3](#S5.SS3 "5.3 Abundances up to Zn (11 ≤ Z ≤ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). Models for parameters between the grid points were constructed via interpolation in the grid. Here, we used the iron abundance [Fe/H] as proxy for the models’ overall metallicities [M/H], since we assume that all elements other than the α-elements follow the solar elemental distribution scaled by [Fe/H]. We note that the fact that HD 20 shows enhancements in the neutron-capture elements (Sect. [5.4](#S5.SS4 "5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) does not prevent this assumption, as the elements in question are only detectable in trace amounts with negligible impact on atmospheric properties such as temperature, density, gas-, or electron pressure.
Our Teff estimate is based on the spectroscopic excitation balance of Fe i lines. This technique relies on tuning the model temperature such that lines at different lower excitation potential (χex) yield the same abundance – in other words a zero-slope of logϵ(Fe\textsci) with χex is enforced. In this respect it is important to account for the circumstance that Fe i transitions are prone to substantial non-LTE (NLTE) effects in metal-poor stars, in a sense that not only the overall abundance is shifted toward higher values, but the magnitude of the effect varies with χex, too. Hence, as pointed out by Lind12, the Teff for which the excitation trend is leveled is shifted to systematically offset temperatures from the LTE case (see Fig. [4](#S3.F4 "Figure 4 ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). To overcome this problem, we computed NLTE abundance departures by interpolation in a close-meshed, precomputed grid of corrections that was created specifically for this project and parameter space (priv. comm.: M. Bergemann and M. Kovalev, see Bergemann12a; Lind12, for details).

Figure 4: Samples drawn from the posterior distribution of the stellar parameters (Eq. [2](#S3.E2 "(2) ‣ 3.3.1 Bayesian inference ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). Shown are the three different approaches M1 (gray), M2 (red), and M3 (blue) with the darkness of the colors illustrating the local density as estimated from a Gaussian kernel density estimate. The sample sizes are 2⋅104 and the adopted stellar parameters from method M3 (Tables [2](#S2.T2 "Table 2 ‣ 2.3 Photometry and astrometric information ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") and [3](#S3.T3 "Table 3 ‣ 3.3.1 Bayesian inference ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) based on the median of the distributions are indicated using horizontal and vertical dashed lines. The correlation coefficients for pairs of two parameters in M3 are presented in the top left corner of each panel. The marginalized, one-dimensional distributions for the individual parameters are depicted by smoothed histograms at the top of each column.
The microturbulence parameter vmic is an ad-hoc parameter that approximatively accounts for the effects of otherwise neglected turbulent motions in the atmosphere, which mainly affect the theoretical line strength of strong lines. Here, we tuned vmic in order to erase trends of the inferred, NLTE-corrected abundances for Fe i features with the reduced line strength, RW=log(EW/λ).

Figure 5: Diagnostic plot for spectroscopic ionization balance. Shown are the histograms of the Fe abundance distributions ([Fe/H]=logϵ(Fe)−7.50 dex) at the adopted gravity (logg=2.24 dex from method M2, see Sect. [3.3.1](#S3.SS3.SSS1 "3.3.1 Bayesian inference ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) both in LTE (gray filled) and NLTE (blue) for Fe i and in LTE for Fe ii (red). NLTE corrections for Fe ii remain well below 0.01 dex and are therefore neglected here. Points with error bars and arbitrary ordinate offsets at the top of the panel denote the means and standard deviations for each of the distributions of the same color.
Even though we prefer our highly accurate asteroseismic measurement over requiring spectroscopic ionization balance for determining logg, we discuss this method here to compare our findings to more classical spectroscopic parameter estimation methods that are widely used throughout the literature. The procedure is based on balancing abundances of neutral lines and singly ionized lines that are sensitive to changes in gravity (see also Appendix [B.2](#A2.SS2 "B.2 Impacts of model atmosphere errors ‣ Appendix B Abundance systematics ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). Hence, by tuning the model gravity to erase discrepancies between the abundances deduced from both ionization states of the same element, logg can be inferred. Commonly, especially for FGK stars, the high number of available Fe lines in both ionization stages qualifies this species as an ideal indicator. While the modeling of Fe ii line strengths is insensitive to departures from LTE (¡0.01 dex), trustworthy gravities from the ionization balance can only be obtained once departures from LTE are removed from the Fe i abundances (e.g., Lind12). In particular, by neglecting NLTE influences, one would considerably underestimate logϵ(Fe\textsci) and consequently logg. This can be seen in Fig. [5](#S3.F5 "Figure 5 ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), where we compare Fe i under the LTE assumption to NLTE-corrected Fe i. Illustrated is the best abundance agreement – that is a perfect overlap of both the logϵ(Fe\textsci)NLTE and logϵ(Fe\textscIi)LTE888Since we find corrections for Fe ii that amount to less than 0.01 dex, we can assume logϵ(Fe\textscIi)LTE=logϵ(Fe\textscIi)NLTE. abundance distributions – obtained for logg=2.24 dex and [M/H]=−1.65 dex.
When assessing the error budget on [Fe/H], we caution that in this study’s realm of very high S/N spectra, random noise is not the prevailing origin for the line-by-line scatter of 0.10 dex and 0.03 dex for logϵ(Fe\textsci)NLTE and logϵ(Fe\textscii)LTE, respectively. In fact, looking at the abundance errors for individual lines from EW errors only (Table [8](#A3.T8 "Table 8 ‣ Appendix C Additional tables ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")), the random component remains well below 0.03 dex in the majority of cases. We conclude that the scatter is mostly of non-stochastic nature – for example due to uncertain oscillator strengths and flaws in the 1D assumption – and hence a division of the rms scatter by the square root of the number of lines is not a statistically meaningful quantifier of the metallicity error (see Appendix [B.1](#A2.SS1 "B.1 Instrument-induced versus other systematics ‣ Appendix B Abundance systematics ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") for more detailed discussions).
####
3.3.1 Bayesian inference
We emphasize that spectroscopic stellar parameters are strongly interdependent, that is, uncertainties and systematic errors of one quantity should not be considered in isolation. The usage of asteroseismic information mitigates this circumstance only to some degree as we show below. Hence, all model parameters need to be iterated until simultaneous convergence is reached. For this purpose, we used the emcee (ForemanMackey13) Python implementation of a Markov chain Monte Carlo (MCMC) sampler in order to draw samples from the posterior probability distribution P of the four model parameters Teff, [M/H], logg, and vmic,
| | | | |
| --- | --- | --- | --- |
| | P(\@vecx|\@vecy)∝L(\@vecy|\@vecx)⋅p(\@vecx), | | (2) |
with L being the likelihood function and p the prior. Here, \@vecy denotes the measured EWs and \@vecx represents the set of model atmosphere parameters. A flat prior of unity was assumed within the parameter space covered by our grid of NLTE corrections, and zero otherwise. We explored three different likelihoods representing the purely spectroscopic LTE (M1) and NLTE (M2) methods, as well as a mixed “NLTE + asteroseismology” (M3) approach. The likelihoods take the form
| | | | |
| --- | --- | --- | --- |
| | L=exp⎛⎝−a2χex2σ2a−b2RW2σ2b−Δ2[M/H],Fe\textscii2σ2Fe\textscii−Γi⎞⎠, | | (3) |
where aχex and bRW are the slopes of the deduced LTE (M1) or NLTE (M2 and M3) abundances, logϵFe\textsci(\@vecy,\@vecx,χex), with χex and RW for any given set of parameters \@vecx. The variances of the latter slopes were determined from repeated linear fits to bootstrapped samples by means of robust least squares involving a smooth L1 loss function. We prefer this non-parametric approach over ordinary least squares because of the systematically underestimated abundance errors from EW uncertainties alone (see previous Sect.). The third term in Eq. [3](#S3.E3 "(3) ‣ 3.3.1 Bayesian inference ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") represents the difference between the model metallicity and Fe ii abundance, whereas Γi introduces the gravity sensitivity. For approaches M1 and M2 it represents the ionization (im-)balance
| | | | |
| --- | --- | --- | --- |
| | ΓMI/MII=Δ2Fe\textsci,Fe\textscii2σ2Δ | | (4) |
in the LTE and NLTE case, while we do not enforce ionization balance for M3, but use the asteroseismic information through
| | | | |
| --- | --- | --- | --- |
| | ΓM3=logg−loggseis.(Teff,fmax)2σ2loggseis.. | | (5) |
In this expression loggseis. is calculated from Eq. [1](#S3.E1 "(1) ‣ 3.1 Surface gravity from TESS asteroseismology ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). We emphasize that – while being clearly subject to biases in LTE – a perfect ionization balance may not be desirable even in the 1D NLTE case (M2), because it still lacks proper descriptions of hydrodynamical and 3D conditions. These might pose other sources for differences between abundances from Fe i and Fe ii at the true logg. In fact, there is a remaining marginal ionization imbalance logϵFe\textscii−logϵFe\textsci=0.08±0.10 when adopting approach M3.
| Method | Teff | [M/H] | logg | vmic |
| --- | --- | --- | --- | --- |
| | [K] | [dex] | [dex] | km s−1 |
| LTE (M1) | 5220+35−32 | −1.78+0.04−0.04 | 1.894+0.115−0.115 | 1.92+0.05−0.05 |
| NLTE (M2) | 5250+48−43 | −1.65+0.04−0.04 | 2.244+0.102−0.095 | 1.97+0.07−0.06 |
| NLTE + asteros. (M3) | 5246+76−50 | −1.60+0.03−0.03 | 2.366+0.020−0.021 | 1.95+0.09−0.06 |
Table 3: Median values and 68.2% confidence intervals for the stellar parameters from the posterior distributions for the three different likelihood functions (see main text for details). The method adopted throughout this work is M3.
Figure [4](#S3.F4 "Figure 4 ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") shows various representations of the multidimensional posterior distributions for M1, M2, and M3. As expected, we found strong correlations between Teff, [M/H], and logg in the purely spectroscopically informed methods M1 and M2. Using approach M3, we can effectively lift the degeneracies with logg as quantified by insignificant Pearson correlation coefficients (Fig. [4](#S3.F4 "Figure 4 ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). For each approach, we deduced the optimal parameters and error margins from the median, 15.9th, and 84.1th percentiles, respectively. These are listed in Table [3](#S3.T3 "Table 3 ‣ 3.3.1 Bayesian inference ‣ 3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). It is evident that M1 significantly underestimates both [M/H] and logg due to deducing lower Fe i abundances that have a direct impact on the ionization balance and therefore the inferred gravity. M2 and M3, however, yield results that are in good agreement with the strongest deviation amounting to just 1.2σ in logg. This highlights the importance of considering NLTE effects already at the stage of stellar parameter inference and shows that 1D NLTE ionization balance is capable of producing gravities that are as accurate as the highly trustworthy asteroseismic scaling relations. Since the precision of the latter is better by about a factor of five, we adopt the parameters inferred from M3 throughout this work. We corroborated this set of fundamental stellar parameters using several independent techniques, including temperatures from the shapes of the Balmer lines in HD 20’s spectrum. The reader is referred to Appendix [A](#A1 "Appendix A Alternative methods for determining stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") for a detailed outline and comparison.
###
3.4 Line broadening

Figure 6: Comparison of synthetic line shapes against the observed profiles in the UVES 580 spectrum for two representative Ti lines. Red spectra resemble 3D syntheses, while blue and orange colors indicate 1D syntheses with and without additional broadening. The instrumental profile (R=66230) was mimicked by convolution with a Gaussian kernel for all three types of synthesis. No rotational broadening was applied to the 3D syntheses.
Carney03 reported a rotational velocity of vrotsini=5.9 km s−1 for HD 20, which is unexpectedly high given the evolutionary state of this star where any initial rotation is expected to be eliminated. The authors caution, however, that the face value just below their instrumental resolution of 8.5 km s−1 might be biased due to a number of systematic influences on their method, amongst which is turbulent broadening (see also, e.g., Preston19). Turbulent and rotational broadening have almost identical impacts on the line shape, a degeneracy that can only be broken using spectra of very high resolution and S/N (Carney08). Hence – despite the name – we rather consider vrotsini a general broadening parameter.
Given that rotation or any other line broadening mechanism are key quantities that critically affect the precision and accuracy of abundances from spectrum synthesis (Sect. [4](#S4 "4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")), we tackled this property from a theoretical point of view. To this end, a collection of isolated Ti i and Ti ii features were simulated using LTE radiative transfer in a CO5BOLD model atmosphere (Freytag12), which realistically models the microphysics of stellar atmospheres under 3D, hydrodynamical conditions. We note that the chosen atmospheric parameters (Teff=5500 K, logg=2.5 dex, [M/H]=−2.0 dex) only roughly match our findings – hence deviations in the abundance scales can be expected. The overall line-shape, however, is expected to be reasonably accurately reproduced. Our synthetic profiles were compared to their observed counterparts in the UVES 580 spectrum, which offers the best trade-off between resolution and S/N in the considered wavelength regimes. The nominal velocity resolution is 4.5 km s−1. Comparisons for two representative lines are presented in Fig. [6](#S3.F6 "Figure 6 ‣ 3.4 Line broadening ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). The 3D profiles are shown next to rotationally broadened, 1D versions and we find that no additional rotational broadening is required in the 3D case as the line shape can be fully recovered by properly accounting for microphysics together with the instrumental resolution. Thus, we conclude that – if at all – HD 20 is rotating only slowly (i.e., vsini≲1 km s−1). On top of the overall line broadening, slight profile asymmetries are correctly reproduced by the 3D models.
In order to improve our 1D spectrum syntheses beyond broadening by the instrumental line spread function, we analyzed the deviation of individual, isolated Fe features from their 1D LTE line shape. The comparison was performed against the UVES 580 and the HARPS spectrum. Based on 171 lines in common for both spectra, we found that a broadening velocity of vmac=5.82±0.03 km s−1 can successfully mimic the line shape from both spectrographs. The latter value is in good agreement with the value 5.9 km s−1 found by Carney03, who do not list an error specific to HD 20 but quote general standard errors between 0.5 and 3 km s−1 for their entire sample of stars.
###
3.5 Other structural parameters
Given our spectroscopic temperature and metallicity, we can deduce HD 20’s luminosity through
| | | | |
| --- | --- | --- | --- |
| | LL⊙=(d10pc)2L0L⊙⋅10−0.4(V−A(V)+BCV(Teff,[Fe/H])) | | (6) |
with the zero-point luminosity L0 (see Table [2](#S2.T2 "Table 2 ‣ 2.3 Photometry and astrometric information ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) and the bolometric correction BCV from the calibration relation by Alonso99, which itself depends on Teff and [Fe/H]. We find L/L⊙=60.9+4.6−4.3, in line with the value 58.6±2.2 reported in Gaia DR2. The error on L was computed through a Monte Carlo error propagation assuming Gaussian error distributions for the input variables and an additional uncertainty for BCV of 0.05 mag. The asymmetric error limits stem from the 15.9 and 84.1 percentiles of the final parameter distributions, respectively.
We can furthermore infer the stellar radius using
| | | | |
| --- | --- | --- | --- |
| | RR⊙=√LL⊙(TeffTeff,⊙)−2 | | (7) |
resulting in 9.44+0.46−0.43. This compares to 8.69+0.19−0.80 from Gaia DR2, where the slight discrepancy can be explained by a higher temperature estimate from Gaia (see discussion in Sect. [A.1.3](#A1.SS1.SSS3 "A.1.3 Color - [Fe/H] - Teff calibrations ‣ A.1 Effective temperature ‣ Appendix A Alternative methods for determining stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")).
Finally, it is possible to deduce a mass estimate using the basic stellar structure equation
| | | | |
| --- | --- | --- | --- |
| | logmm\sun=loggg⊙−4logTeffTeff,\sun+0.4(Mbol,\sun−Mbol). | | (8) |
The involved solar reference values can be found in Table [2](#S2.T2 "Table 2 ‣ 2.3 Photometry and astrometric information ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). As for Eq. [6](#S3.E6 "(6) ‣ 3.5 Other structural parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), the bolometric magnitude Mbol can be computed from the V-band photometry and the BCV relation by AAM99. We find a mass of (0.76±0.08)M⊙.
4 Abundance analysis
---------------------
| | | | |
| --- | --- | --- | --- |
| | LTE | NLTE | |
| X | ⟨logϵ(X)⟩a | [X/Fe]b | n | ⟨logϵ(X)⟩a | [X/Fe] | n | logϵ(X)c\sun |
| | [dex] | [dex] | | [dex] | [dex] | | [dex] |
| Li i | <−0.34 | <0.41 | 1 | … | … | … | 1.05 |
| C (CH) | 6.25±0.05 | −0.38±0.07 | 1 | … | … | … | 8.43 |
| N (NH) | 6.21±0.10 | 0.18±0.11 | 1 | … | … | … | 7.83 |
| O i | 7.79±0.18 | 0.70±0.18 | 2 | 7.79±0.18 | 0.70±0.18 | 2 | 8.69 |
| Na i | 4.50±0.10 | 0.06±0.11 | 2 | 4.42±0.10 | −0.14±0.12 | 2 | 6.24 |
| Mg i | 6.25±0.10 | 0.45±0.11 | 3 | 6.24±0.10 | 0.32±0.12 | 3 | 7.60 |
| Al i | 4.54±0.15 | −0.11±0.16 | 5 | 4.68±0.06 | −0.09±0.09 | 5 | 6.45 |
| Si i | 6.21±0.13 | 0.50±0.14 | 16 | 6.23±0.03 | 0.40±0.08 | 5 | 7.51 |
| Si ii | 6.49±0.10 | 0.58±0.10 | 2 | … | … | … | 7.51 |
| S i | 6.03±0.04 | 0.71±0.06 | 4 | … | … | … | 7.12 |
| K i | 4.15±0.10 | 0.92±0.11 | 2 | 3.60±0.10 | 0.25±0.12 | 2 | 5.03 |
| Ca i | 4.92±0.06 | 0.38±0.08 | 25 | … | … | … | 6.34 |
| Sc ii | 1.66±0.06 | 0.11±0.06 | 11 | … | … | … | 3.15 |
| Ti i | 3.34±0.10 | 0.19±0.11 | 20 | 3.81±0.10 | 0.54±0.13 | 15 | 4.95 |
| Ti ii | 3.67±0.10 | 0.31±0.10 | 28 | … | … | … | 4.95 |
| V i | 2.07±0.12 | −0.06±0.13 | 4 | … | … | … | 3.93 |
| V ii | 2.50±0.10 | 0.16±0.10 | 2 | … | … | … | 3.93 |
| Cr i | 3.67±0.12 | −0.17±0.13 | 12 | … | … | … | 5.64 |
| Cr ii | 4.25±0.15 | 0.21±0.15 | 15 | … | … | … | 5.64 |
| Mn i | 3.33±0.08 | −0.30±0.10 | 4 | 3.73±0.08 | −0.02±0.11 | 4 | 5.43 |
| Fe i | 5.70±0.07 | 0.00±0.09 | 133 | 5.82±0.10 | 0.00±0.13 | 133 | 7.50 |
| Fe ii | 5.90±0.03 | 0.00±0.04 | 13 | 5.90±0.03 | 0.00±0.04 | 13 | 7.50 |
| Co i | 3.25±0.11 | 0.06±0.12 | 6 | 3.61±0.10 | 0.30±0.12 | 3 | 4.99 |
| Ni i | 4.43±0.07 | 0.01±0.09 | 32 | … | … | … | 6.22 |
| Cu i | 1.76±0.10 | −0.63±0.11 | 3 | … | … | … | 4.19 |
| Zn i | 2.88±0.10 | 0.12±0.11 | 2 | … | … | … | 4.56 |
| Ga i | 1.03±0.20 | −0.21±0.21 | 1 | … | … | … | 3.04 |
| Rb i | <1.52 | <0.45 | 1 | … | … | … | 2.87 |
| Sr i | 1.00±0.10 | −0.07±0.11 | 1 | 1.40±0.10 | 0.21±0.12 | 1 | 2.87 |
| Sr ii | 1.50±0.26 | 0.23±0.26 | 1 | … | … | … | 2.87 |
| Y ii | 0.54±0.09 | −0.07±0.09 | 7 | … | … | … | 2.21 |
| Zr ii | 1.26±0.09 | 0.28±0.09 | 5 | 1.41±0.10 | 0.42±0.10 | 2 | 2.58 |
| Mo i | 0.48±0.10 | 0.40±0.11 | 1 | … | … | … | 1.88 |
| Ru i | 0.55±0.10 | 0.60±0.11 | 1 | … | … | … | 1.75 |
| Rh i | −0.19±0.40 | 0.70±0.40 | 1 | … | … | … | 0.91 |
| Pd i | −0.12±0.20 | 0.11±0.21 | 1 | … | … | … | 1.57 |
| Ag i | −0.29±0.20 | 0.57±0.21 | 1 | … | … | … | 0.94 |
| Ba ii | 0.93±0.10 | 0.35±0.10 | 2 | 0.74±0.10 | 0.16±0.10 | 2 | 2.18 |
| La ii | −0.09±0.06 | 0.41±0.06 | 6 | … | … | … | 1.10 |
| Ce ii | 0.19±0.03 | 0.21±0.04 | 11 | … | … | … | 1.58 |
| Pr ii | −0.35±0.10 | 0.53±0.10 | 3 | … | … | … | 0.72 |
| Nd ii | 0.21±0.06 | 0.39±0.06 | 22 | … | … | … | 1.42 |
| Sm ii | −0.06±0.04 | 0.58±0.05 | 12 | … | … | … | 0.96 |
| Eu ii | −0.35±0.10 | 0.73±0.10 | 3 | … | … | … | 0.52 |
| Gd ii | 0.04±0.15 | 0.56±0.15 | 5 | … | … | … | 1.07 |
| Tb ii | −0.74±0.10 | 0.55±0.10 | 2 | … | … | … | 0.30 |
| Dy ii | 0.21±0.07 | 0.70±0.07 | 4 | … | … | … | 1.10 |
| Ho ii | −0.49±0.10 | 0.63±0.10 | 2 | … | … | … | 0.48 |
| Er ii | −0.04±0.09 | 0.64±0.09 | 4 | … | … | … | 0.92 |
| Tm ii | −0.87±0.00 | 0.63±0.02 | 4 | … | … | … | 0.10 |
| Yb ii | −0.06±0.20 | 0.70±0.20 | 1 | … | … | … | 0.84 |
| Lu ii | −0.57±0.10 | 0.93±0.10 | 1 | … | … | … | 0.10 |
| Hf ii | −0.23±0.10 | 0.52±0.10 | 3 | … | … | … | 0.85 |
| Os i | 0.40±0.10 | 0.80±0.11 | 2 | … | … | … | 1.40 |
| Ir i | 0.42±0.10 | 0.84±0.11 | 1 | … | … | … | 1.38 |
| Pb i | <0.37 | <0.42 | 1 | … | … | … | 1.75d |
| Th ii | −0.85±0.10 | 0.73±0.10 | 1 | … | … | … | 0.02 |
| U ii | <−1.21 | <0.93 | 1 | … | … | … | -0.54 |
999
$a$$a$footnotetext: For n≥4, the error is considered as the mad of the line-by-line abundance distribution scaled by the factor 1.48 to be concordant with a normal distribution. Otherwise, a floor error of 0.10 dex is assumed (see main text for details).
$b$$b$footnotetext: With the exception of O i, [X/Fe]LTE is given relative to the LTE abundance of the Fe species at the same ionization stage.
$c$$c$footnotetext: The solar reference abundances are from Asplund09.
$d$$d$footnotetext: The Pb abundance was taken from meteoroids.
Table 4: Final adopted abundances.
The abundances presented here were computed using either EWs (Sect. [4.2](#S4.SS2 "4.2 Equivalent widths ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) or spectrum synthesis for such cases where blending was found to be substantial. For this purpose we employed the spectra providing the highest S/N at any given wavelength, that is, UVES 390 blueward of ∼4300 Å, MIKE blue for 4300≲λ≲5000 Å, and MIKE red in the regime 5000≲λ≲8000 Å (cf. Fig. [1](#S2.F1 "Figure 1 ‣ 2.1 Spectroscopic observations ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). Despite the circumstance that MIKE reaches substantially more redward, we do not consider it there because of considerable fringing. The radiation transfer was solved using MOOG and an ATLAS9 model for our exact specifications (previous Sects. and Table [2](#S2.T2 "Table 2 ‣ 2.3 Photometry and astrometric information ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) that was constructed via interpolation. Our computations involved molecular equilibrium computations involving a network consisting of the species H2, CH, NH, OH, C2, CN, CO, N2, NO, O2, TiO, H2O, and CO2. Individual, line-by-line abundances can be found in Table [8](#A3.T8 "Table 8 ‣ Appendix C Additional tables ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), while we summarize the adopted final abundances and their associated errors in Table [4](#S4.T4 "Table 4 ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). In order to reduce the impact of outliers, abundances were averaged using the median. For ensembles of four and more lines, we computed the corresponding errors via the median absolute deviation (mad) which is scaled by the factor 1.48 in order to be conform with Gaussian standard errors. As noted already in Sect. [3.3](#S3.SS3 "3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), for the vast majority of species, the magnitude of the line-by-line scatter is inconsistent with merely the propagation of random spectrum noise, but accounts for additional – possibly systematic – sources of error further down in the abundance analysis. Consequently, we set a floor uncertainty of 0.10 dex for those species with less than four available lines, where the mad would not be a robust estimator for the scatter. For a discussion of this as well as of influences from uncertain stellar parameters, we refer the reader to Appendices [B.1](#A2.SS1 "B.1 Instrument-induced versus other systematics ‣ Appendix B Abundance systematics ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") and [B.2](#A2.SS2 "B.2 Impacts of model atmosphere errors ‣ Appendix B Abundance systematics ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). For elements with only one line measured with the line abundance uncertainty alone exceeding the floor error, we adopted the error on the line abundance instead.
###
4.1 Line list
Suitable lines for an abundance analysis of HD 20 were compiled and identified using literature atomic data. We retrieved all line data that are available through the Vienna Atomic Line Database (VALD, Piskunov95; Ryabchikova15) in the wavelength range from 3280 Å to 8000 Å, representing the combined wavelength coverage of the spectra at hand. In a first run, we synthesized a spectrum from this line list and discarded all profiles that did not exceed a line depth of 0.1% of the continuum level. The remaining features were visually checked for their degree of isolation and usability by comparing the observed spectra with syntheses with varying elemental abundances. The resulting list with the adopted line parameters and original sources thereof can be found in Table [8](#A3.T8 "Table 8 ‣ Appendix C Additional tables ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). Additional hyperfine structure (HFS) line lists were considered for the elements Li (Hobbs99), Sc (Kurucz95), V (Lawler14), Mn (DenHartog11), Co (Kurucz95), Cu (Kurucz95), Ag (Hansen12), Ba (McWilliam98), La (Lawler01a), Pr (Sneden09), Eu (Lawler01b), Tb (Lawler01c), Ho (Lawler04), Yb (Sneden09), and Lu (LSCI).
###
4.2 Equivalent widths
The majority of the spectral features identified to be suitable for our analysis are sufficiently isolated so that an EW analysis could be pursued. We measured EWs from the spectra of all three spectrographs using our own semi-automated Python tool EWCODE (Hanke17). In brief, EWCODE places a local, linear continuum estimate that is based on the neighboring wavelength ranges next to the profile of interest and fits Gaussian profiles. The user is prompted with the fit and can interactively improve the fit by, for example, introducing additional blends or refining the widths of the continuum ranges. Our measurements for individual lines along with EWCODE’s error estimates are listed in Table [8](#A3.T8 "Table 8 ‣ Appendix C Additional tables ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).").
###
4.3 Notes on individual elements
In the following, we will comment in detail on the analysis of abundances from several features that needed special attention exceeding the standard EW or spectrum synthesis analysis. Furthermore, whenever available, we comment on NLTE corrections that were applied to the LTE abundances.
####
4.3.1 Lithium (Z = 3)
The expected strongest feature of Li i is the resonance transition at 6707.8 Å. Despite our high-quality data, within the noise boundaries, the spectrum of HD 20 appears perfectly flat with no feature identifiable whatsoever. For the region in question we estimate from our MIKE spectrum S/N≈1050 pixel−1, which would allow for 3σ detections of Gaussian-like features with EWs of at least 0.3 mÅ as deduced from the formalism provided in Battaglia08. The latter EW translates into an upper limit logϵ(Li)<−0.34 dex.
####
4.3.2 Carbon, nitrogen, and oxygen (Z = 6, 7, and 8)

Figure 7: C abundance and 12C/13C from the CH G-band in the UVES 390 spectrum. Left panel: Region around the two features that are dominated by 13CH, one of which is used to pinpoint 12C/13C (blue rectangle). The bluer feature at ∼4230 Å was not considered due to an unidentified blend (see main text). The observed spectrum is represented by black dots connected by gray lines and the best-fit synthesis (red) and its abundance error margin of 0.05 dex are depicted in blue, respectively. The dashed spectrum shows a synthesis without any C. Right panel: Same as left panel but in the range used to constrain the C (CH) abundance.

Figure 8: Two-dimensional representation of the MCMC sample used to fit logϵ(C) and 12C/13C simultaneously including the marginal distributions. Median values and asymmetric limits are displayed by dashed lines.
Our C abundances are based on synthesis of the region around the CH G-band at ∼4300 Å with molecular line data for 12CH and 13CH from Masseron14. We identified a range between 4310.8 Å and 4312.1 Å that in HD 20 is almost devoid of atomic absorption and hence is ideal for CH synthesis irrespective of other elemental abundances. We show this range in Fig. [7](#S4.F7 "Figure 7 ‣ 4.3.2 Carbon, nitrogen, and oxygen (Z = 6, 7, and 8) ‣ 4.3 Notes on individual elements ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). Only very substantial changes in the model isotopic ratio 12C/13C have a notable effect on this region, manifesting mostly in an effective blue- or redshift of the molecular features. In contrast, the two 13CH profiles near ∼4230 Å (left panel of Fig. [7](#S4.F7 "Figure 7 ‣ 4.3.2 Carbon, nitrogen, and oxygen (Z = 6, 7, and 8) ‣ 4.3 Notes on individual elements ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) are rather sensitive to the isotopic ratio. As cautioned by Spite06, the blueward profile has a dominant blend they attribute to an unidentified transition from an r-process element. Given the r-process-rich nature of our star, we do not consider this feature here. Employing both ranges, one for the C abundance and one for 12C/13C, the two measures can be effectively decoupled as can be seen in Fig. [8](#S4.F8 "Figure 8 ‣ 4.3.2 Carbon, nitrogen, and oxygen (Z = 6, 7, and 8) ‣ 4.3 Notes on individual elements ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), where we present the results of an MCMC sampling run used to draw from the posterior distribution of the fitted parameters in the regions indicated in Fig. [7](#S4.F7 "Figure 7 ‣ 4.3.2 Carbon, nitrogen, and oxygen (Z = 6, 7, and 8) ‣ 4.3 Notes on individual elements ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). From this distribution we determine 12C/13C=3.92+1.68−0.98. Though nominally less, an error of 0.05 dex was adopted for logϵ(C)=6.25 dex in order to account for the circumstance that the continuum level in the right-hand spectrum had to be established from a region more than one Å away on either side, thus introducing a slight normalization uncertainty.

Figure 9: Same as right panel of Fig. [7](#S4.F7 "Figure 7 ‣ 4.3.2 Carbon, nitrogen, and oxygen (Z = 6, 7, and 8) ‣ 4.3 Notes on individual elements ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), but for a synthesis of the NH-band at ∼3360 Å. A synthesis without any N is indicated by the black dashed curve. The blue error range corresponds to an abundance variation of ±0.10 dex.
We determined the N abundance in a similar fashion employing the NH-band at ∼3360 Å (see Fig. [9](#S4.F9 "Figure 9 ‣ 4.3.2 Carbon, nitrogen, and oxygen (Z = 6, 7, and 8) ‣ 4.3 Notes on individual elements ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). From our synthesis we inferred logϵ(N)=6.21±0.10 dex. The present data do not permit the determination of the isotopic ratio 14N/15N.
Unfortunately, the frequently used [O i] line at 6300.3 Å is strongly blended with telluric absorption features in all available spectra and hence rendered useless for precise abundance studies. Nonetheless, the high S/N of the MIKE spectra allowed for the measurement of the much weaker [O i] transitions at 5577.3 Å and 6363.8 Å, from which we deduced a mean abundance of logϵ(O\textsci)LTE=7.79±0.18 dex, or [O/Fe]=0.70 dex. The forbidden lines ought to have negligible LTE corrections, because they have metastable upper levels. Hence, the collisional rate is higher than the radiative rate and LTE is obtained, in other words logϵ(O\textsci)NLTE=logϵ(O\textsci)LTE. Severe changes in the O abundance result in non-negligible effects on the molecular equilibrium, in particular through their impact on the formation of CO. For this reason, the overabundance found here was considered in all syntheses, including the ones for CH and NH outlined above.
We note here that abundances from the O triplet at ∼7773 Å could be firmly detected and are listed in Table [8](#A3.T8 "Table 8 ‣ Appendix C Additional tables ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). However, we discard them (logϵ(O\textsci)LTE≈8.22 dex) from consideration in this work, since they are in strong disagreement to the abundances from the forbidden lines. The formation of the lines in question is subject to considerable NLTE effects as shown by, for example, Sitnova13. Using the MPIA NLTE spectrum tools101010<http://nlte.mpia.de/gui-siuAC_secE.php> to retrieve corrections for individual line abundances, we found an average 1D NLTE bias of −0.14 dex, which is not enough to erase the discrepancy. We therefore suspect much stronger effects when considering line formation in NLTE using 3D dynamical models (e.g., Amarsi19).
####
4.3.3 Sodium (Z = 11)
Equivalent widths from the two weak Na lines at 5682 Å and 5688 Å were employed to compute an abundance of logϵ(Na)LTE=4.50±0.10 dex. We emphasize the artificial increase of the latter uncertainty to 0.10 dex as discussed earlier. According to the INSPECT database111111[www.inspect-stars.com](http://www.inspect-stars.com) (Lind11), for these lines and HD 20’s parameters a mean NLTE correction of −0.08 dex should be applied, leading to logϵ(Na)NLTE=4.42 dex and consequently [Na/Fe]=−0.14 dex. The frequently used Na i transitions at 6154 Å and 6160 Å could not be firmly detected in any of our spectra owing to HD 20’s rather high temperature, which strongly reduces the strength of these lines.
####
4.3.4 Magnesium (Z = 12)
The three Mg i lines employed for abundance determinations in this work were corrected for departures from the LTE assumptions by means of the MPIA NLTE spectrum tools, which is based on Bergemann17a; Bergemann17b. The mean correction is only +0.04 dex, indicating that the effects are not severe for the selected lines.
####
4.3.5 Aluminum (Z = 13)
Our Al abundance for HD 20 is based on five neutral transitions. While spectrum syntheses revealed the 3944 Å profile to be severely blended, the other strong UV resonance feature at 3961 Å was found to be sufficiently isolated for getting a robust abundance. In addition, the high S/N of our MIKE spectrum allowed for the detection of two pairs of weak, high-excitation lines at ∼6697 Å and ∼7835 Å, respectively. In LTE, there is a considerable difference of almost 1 dex between the abundances from the resonance line (logϵ(Al)LTE=3.58 dex), and the four weak lines (logϵ(Al)LTE=4.54 dex). As shown by Nordlander17, this can be explained by substantial NLTE effects on Al line formation in metal-poor giants like HD 20. Indeed, by interpolation in their pre-computed grid, we found corrections of 1.02 dex for the strong line and 0.14 to 0.20 dex for the weak lines, which alleviates the observed discrepancy. We emphasize that [Al/Fe] (Table [4](#S4.T4 "Table 4 ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) remains unaltered by going from LTE to NLTE, because both the Fe i transitions and the majority of our Al i lines experience the same direction and magnitude of corrections. We note here that Barklem05 report on a strong depletion in LTE of [Al/Fe]=−0.80 dex (on the scale of Asplund09) based on the UV resonance line, only. Hence, that finding at face value should be treated with caution since severe NLTE biases can be expected.
####
4.3.6 Silicon (Z = 14)
Five of our 16 Si i lines with measured EWs have a correspondence in the MPIA NLTE database (Bergemann13). The deduced corrections for HD 20’s stellar parameters are marginal at a level of −0.01 to −0.04 dex. As a consequence, the ionization imbalance of −0.28 dex between Si i and Si ii that prevails in LTE cannot be compensated this way. Lacking NLTE corrections for our two Si ii transitions, however, we cannot draw definite conclusions at this point.
####
4.3.7 Sulfur (Z = 16)
We detected in total four S features that are spread over two wavelength windows at ∼4695 Å and ∼6757 Å, corresponding to the second and eighth S i multiplet. Using spectrum synthesis, we found a mean abundance logϵ(S\textsci)LTE=6.03±0.04 dex that is mainly driven by the strongest profile at 4694.1 Å. Concerning influences of NLTE on S i, in the literature there is no study dealing with the second multiplet. For the eighth multiplet, however, Korotin08; Korotin09 and Korotin17 showed that the expected corrections for HD 20 are minor and remain well below 0.10 dex. Since we detected no considerable difference in our LTE analysis between the eighth and second multiplet, we conclude that the correction – if any – for the second multiplet is probably small, too.
####
4.3.8 Potassium (Z = 19)
The K abundance presented here is based on the EWs of two red resonance lines at 7665 Å and 7699 Å, respectively. These lines are expected to be subject to severe departures from LTE. Mucciarelli17 showed for giants in four globular clusters that the magnitude of the NLTE correction strongly increases with increasing Teff, logg, and logϵ(K)LTE. One of their clusters, NGC 6752, exhibits a similar metallicity (−1.55 dex) as HD 20 and we estimate from their Fig. 3 a correction of our LTE abundance of at least −0.5 to −0.6 dex. For our adopted NLTE abundance (Table [4](#S4.T4 "Table 4 ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) we assume a shift by −0.55 dex.
####
4.3.9 Titanium (Z = 22)
Our LTE analysis of Ti lines shows an ionization imbalance of (logϵ(Ti\textsci)−logϵ(Ti\textscii))LTE=−0.33 dex. We have determined line-by-line NLTE corrections for our Ti i abundances from the grid by Bergemann11 amounting to values ranging from +0.4 to +0.6 dex. It is noteworthy that corrections to Ti ii are insignificant in the present regime of stellar parameters (cf., Bergemann11). The newly derived NLTE abundances switch the sign of the ionization imbalance with a reduced amplitude ((logϵ(Ti\textsci)−logϵ(Ti\textscii))NLTE=+0.14 dex). Inconsistencies in other metal-poor stars manifesting themselves in ionization imbalances even in NLTE have already been noted by Bergemann11 and were explained by inaccurate or missing atomic data. More recently, Sitnova16 found lower NLTE corrections and therefore weaker – but still non-zero – ionization imbalances for stars in common with Bergemann11, which they mainly attributed to the inclusion of high-excitation levels of Ti i in their model atom. In light of prevailing uncertainties of Ti i NLTE calculations, we do not believe that the ionization imbalance of Ti contradicts our results from Sect. [3.3](#S3.SS3 "3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).").
####
4.3.10 Manganese (Z = 25)
Following Bergemann08, our four abundances from Mn i lines should experience a considerable mean NLTE adjustment of +0.40 dex and thus are consistent with a solar [Mn/Fe]. More recently, Mishenina15 casted some doubt on the robustness of the aforementioned NLTE calculations by showing the absence of systematic discrepancies in LTE between multiplets that according to Bergemann08 ought to have different NLTE corrections. Nonetheless, Bergemann19 corroborated the strong NLTE corrections found in the earlier study. Moreover, the authors remark that Mn i transitions at a lower excitation potential of more than 2 eV are not strongly affected by convection – that is 3D effects – and are recommended as 1D NLTE estimator. Since the latter is satisfied for all of our four used Mn lines, our 1D NLTE abundance ought to be an accurate estimate.
###
4.4 Cobalt (Z = 27)
The Co NLTE corrections were obtained from Bergemann10. For three out of the six measured lines corrections are available and amount to +0.46 dex on average.
####
4.4.1 Copper (Z = 29)
We measured three profiles of Cu i in our spectra, two of which originate from low-excitation (∼1.5 eV) states. Albeit for dwarfs, at [Fe/H]∼−1.5 dex, Yan15 predicted for these two transitions at 5105.5 Å and 5782.2 Å stronger NLTE corrections compared to the ones for our high-excitation (∼3.8 eV) line. This is somewhat reflected in our LTE abundances where the lower-excitation lines yield a lower value by about 0.3 dex. Lacking a published pre-computed grid, it is hard to predict the exact amount of NLTE departures for our giant star and its temperature. Yet, Shi18 and Korotin18 showed that the corrections correlate much stronger with [Fe/H] than they do with logg or Teff. We make no attempt to rectify our Cu abundances at this point, but judging from the literature we note that the corrections are probably on the order of +0.2 dex for the low-excitation- and +0.1 dex for the high-excitation lines.
####
4.4.2 Strontium (Z = 38)
In principle, our spectra cover the UV resonance lines of Sr ii at 4077 Å and 4215 Å, though we found those to be strongly saturated and we could not reproduce the line shape through LTE synthesis. Furthermore, the lines in question are subject to a substantial degree of blending by several atomic and molecular transitions (see also Andrievsky11). Fortunately, it was possible to measure EWs of the much weaker lines at 4607 Å (Sr i) and at 4161 Å (Sr ii). For these we deduced abundances of 1.00 dex and 1.50 dex, respectively, which indicates a substantial discrepancy between the two ionization stages. The latter can be attributed to considerable NLTE departures for the neutral transition. Bergemann12121212Bergemann12 mention a Sr ii line at 4167.8 Å in their Table 1. However the line parameters provided are for the line at 4161.8 Å. NLTE corrections are not provided for this transition. and Hansen13 performed extensive NLTE calculations for this line from which we extract a correction of +0.4 dex for HD 20’s stellar parameters. Thus, the observed difference is effectively erased, although we emphasize the lack of published Sr ii corrections for the line and stellar parameters in question, which – in turn – may re-introduce a slight disagreement.
####
4.4.3 Zirconium (Z = 40)
Two out of our five measured Zr ii lines were investigated for NLTE effects by Velichko10. The authors note that departures mainly depend on metallicity and gravity, whereas there is only a weak coupling to Teff. From their published grid of corrections we extrapolate corrections of 0.15 dex and 0.18 dex for our abundances from the lines at 4209.0 Å and 5112.3 Å, respectively.
####
4.4.4 Barium (Z = 56)
In HD 20, the Ba ii profile at 4554 Å is strongly saturated and thus largely insensitive to abundance. We further excluded the 6141 Å line because of blending by an Fe feature. Our abundance hence is based on synthesis of the two clean and only moderately strong transitions at 5853 Å and 6496 Å, yielding logϵ(Ba\textscii))LTE=0.77 dex and 1.09 dex, respectively. In light of the recent work on NLTE line formation by Mashonkina19, the presented disagreement can be expected in LTE, as in our parameter regime NLTE corrections for the two lines differ. Indeed, interpolation in their published grid131313The grid does not reach down to logg=2.37 dex, but instead ends at logg=3.0 dex. Consequently, a linear extrapolation was performed. We note, however, that this seems uncritical since gravity is not a governing parameter in the considered regime. resulted in corrections of −0.10 dex and −0.27 dex, hence reducing the gap to 0.15 dex, which can be explained by the combined statistical uncertainties.
####
4.4.5 Lutetium (Z = 71)

Figure 10: Synthesis of the Lu ii line at 6221.9 Å. The red line represents the best abundance match with an error of 0.1 dex (blue shaded region). The broad range of HFS components for 175Lu from LSCI are indicated by vertical orange lines at the top and have been taken into account for this synthesis. The impact of the negligence of HFS on the line shape is indicated by the blue line.
The very high S/N of about 1000 pixel−1 in the MIKE spectrum in concert with an overall high Lu abundance ([Lu/Fe]=0.93 dex) allowed for a solid detection (4.7 mÅ) of the otherwise very weak Lu ii profile at 6221.9 Å. We mention the line here explicitly, because it was found to have an exceptionally pronounced HFS structure as we show in Fig. [10](#S4.F10 "Figure 10 ‣ 4.4.5 Lutetium (Z = 71) ‣ 4.4 Cobalt (Z = 27) ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") where two syntheses are compared; one including HFS and one neglecting it. The line components were taken from LSCI. We note that we consider only the 175Lu isotope here, because the only other stable isotope, 176Lu, is expected to be a minority component judging from its solar fractional abundance (2.59%, LSCI). Despite the considerable additional line broadening due to atmospheric effects (Sect. [3.4](#S3.SS4 "3.4 Line broadening ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")), hyperfine splitting is still the dominant source of broadening, thus highlighting the importance of including it in our analysis.
####
4.4.6 Upper limits on rubidium, lead, and uranium (Z = 37, 82, and 92)

Figure 11: Upper limit on Rb from the Rb i line at 7800.3 Å. The red model denotes the adopted upper limit of +1.52 dex, whereas blue lines are syntheses with Rb abundances successively increased by 0.2 dex.

Figure 12: Same as Fig. [11](#S4.F11 "Figure 11 ‣ 4.4.6 Upper limits on rubidium, lead, and uranium (Z = 37, 82, and 92) ‣ 4.4 Cobalt (Z = 27) ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") but for the Pb i transition at 4057.8 Å and an upper limit of +0.37 dex.

Figure 13: Same as Fig. [11](#S4.F11 "Figure 11 ‣ 4.4.6 Upper limits on rubidium, lead, and uranium (Z = 37, 82, and 92) ‣ 4.4 Cobalt (Z = 27) ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") but for the U ii feature at 3859.6 Å and an upper limit of −1.21 dex.
For Rb, Pb, and U it was not possible to obtain solid detections despite the high-quality spectra at hand. Nonetheless, we could estimate reasonable upper limits based on the lines at 7800.3 Å (Rb i), 4057.8 Å (Pb i), and 3859.6 Å (U ii). Since there is a considerable amount of blending by a variety of species involved in shaping the spectrum in the three wavelength regimes, we cannot estimate the upper limit in the same way as for Li (Sect. [4.3.1](#S4.SS3.SSS1 "4.3.1 Lithium (Z = 3) ‣ 4.3 Notes on individual elements ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). Thus, we used synthesis at varying abundances of the target elements in order to establish the highest abundance that is still consistent with the noise level present in the spectral regions (Figs. [11](#S4.F11 "Figure 11 ‣ 4.4.6 Upper limits on rubidium, lead, and uranium (Z = 37, 82, and 92) ‣ 4.4 Cobalt (Z = 27) ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), [12](#S4.F12 "Figure 12 ‣ 4.4.6 Upper limits on rubidium, lead, and uranium (Z = 37, 82, and 92) ‣ 4.4 Cobalt (Z = 27) ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), and [13](#S4.F13 "Figure 13 ‣ 4.4.6 Upper limits on rubidium, lead, and uranium (Z = 37, 82, and 92) ‣ 4.4 Cobalt (Z = 27) ‣ 4 Abundance analysis ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). This way, we found logϵ(Rb)<1.52 dex, logϵ(Pb)<0.37 dex, and logϵ(U)<−1.21 dex, respectively.
5 Results and Discussion
-------------------------
###
5.1 Light elements (Z ≤ 8)

Figure 14: Comparison of CNO elemental abundances of mixed and unmixed stars with HD 20 shown in blue for comparison. Gray circles resemble the study by Gratton00 while red triangles indicate the stars published in Spite05; Spite06. Two C-rich stars were excluded from the latter sample. Lower limits on 12C/13C are indicated by upward pointing arrows and the classification into mixed and unmixed stars according to the authors are represented by open and filled symbols, respectively. The red line in the lower panel mimics the linear relation between [N/O] and [O/H] for mixed stars as reported by Spite05, whereas the dashed line extrapolates the same relation to higher values of [O/H].
Our Li, C, and N abundances show imprints of a pattern that is commonly attributed to internal mixing occuring when a star reaches the RGB bump where processed material from the H-burning shell gets dredged up to the convective layer. Observationally, the effect can be seen in the stellar surface abundances of bright giants (brighter than the RGB-bump at logL/L⊙∼1.8, e.g., Gratton00) and horizontal branch stars that show non-detections of Li and depletions of [C/Fe] in lockstep with low 12C/13C ratios and enhancements in [N/Fe]. Indeed, for HD 20 we could not detect Li and found [C/Fe]=−0.38 dex, a value that is representative for the samples of mixed stars by Gratton00 and Spite06. On the other hand, as can be seen in Fig. [14](#S5.F14 "Figure 14 ‣ 5.1 Light elements (Z ≤ 8) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), the marginal enhancement in [N/Fe] (0.18±0.11 dex) and as a consequence the comparatively high [C/N] (−0.56 dex) render HD 20 at rather extreme positions among the mixed populations. A further puzzling observation is the strong O overabundance of [O/Fe]=0.70 dex that places HD 20 slightly below the general trend of [N/O] with [O/H] by Spite05 that appears generic for mixed stars (lower panel of Fig. [14](#S5.F14 "Figure 14 ‣ 5.1 Light elements (Z ≤ 8) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). We lack a suitable explanation for a mechanism that could produce such large O excesses. Deep mixing with O-N cycle material can be ruled out as origin, as the O-N cycle would produce N at the expense of O and therefore show depletions – which is exactly the opposite of the observed O enhancement.
###
5.2 HD 20’s evolutionary state

Figure 15: Kiel diagram (upper panel) and Hertzsprung-Russell diagram (lower panel) with isochrones and helium burning tracks. HD 20’s position is depicted by a blue filled circle with error bars. In the upper panel the error on the gravity is smaller than the circle size. The red line represents a He-normal 11 Gyr isochrone at [Fe/H]=−1.60 dex, and [α/Fe]=+0.4 dex with age and metallicity error margins shown by orange and blue ranges. The RGB luminosity bump for this particular model at logL/L⊙∼2.0 is highlighted in the lower panel by an arrow and the label “LB”. The light blue curve is a model with the same parameters except for Y=0.4. He-burning tracks for three different masses are shown by gray lines of different line styles with the stellar masses being indicated next to the respective tracks.
Earlier works on HD 20 assumed it to be a red horizontal branch star (e.g., Gratton00; Carney03). Given our newly derived set of fundamental parameters, we can neither reject nor confirm this hypothesis. In Fig. [15](#S5.F15 "Figure 15 ‣ 5.2 HD 20’s evolutionary state ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), we illustrate HD 20’s position in the space of the structural parameters Teff, logL/L⊙, and logg together with an isochrone from the Dartmouth Stellar Evolution Database (Dotter08). The model parameters were selected to resemble the findings in the present work, that is, an age of 11 Gyr (Sect. [5.6](#S5.SS6 "5.6 Cosmochronological age ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")), [Fe/H]=−1.60 dex, as well as [α/Fe]=0.40 dex (Sect. [5.3](#S5.SS3 "5.3 Abundances up to Zn (11 ≤ Z ≤ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). The impacts from uncertainties in the two input parameters that affect the isochrone most – the stellar age and [Fe/H] – are indicated by representative error margins. While we adopted a standard scaling for the He mass fraction (Y=0.245+1.5⋅Z) for the latter model, we furthermore show the case of an extreme He enhancement of Y=0.4. In addition, a set of He-burning tracks for three different stellar masses (0.70, 0.85, and 0.9M\sun) from the Dartmouth database are depicted in the same plot.
Given its luminosity and/or gravity (logL/L⊙=1.78 and logg=2.366), HD 20 appears too warm for a ∼11 Gyr old classical red giant, though the implied mass from the isochrone of 0.84M\sun resides within one standard deviation of our mass estimate (0.76±0.08M\sun). On the other hand, taking our asteroseismic mass and L for granted, HD 20 would be between 250 K and 350 K too cool to be consistent with the models for the horizontal branch, depending on whether a one-sigma or spot-on agreement is desired. This appears infeasible even for slightly warmer photometric temperature scales (Appendix [A.1.3](#A1.SS1.SSS3 "A.1.3 Color - [Fe/H] - Teff calibrations ‣ A.1 Effective temperature ‣ Appendix A Alternative methods for determining stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). Still, the circumstance that our star is significantly fainter than the luminosity bump of the presented isochrone at logL/L⊙∼2.0 while nonetheless exhibiting mixing signatures (see previous Sect.) points towards a scenario where HD 20 has already evolved all the way through the red giant phase and is in fact now a horizontal branch star.
An alternative hypothesis for explaining HD 20’s position in the Hertzsprung-Russell diagram would be a non-standard He content as the model with strongly increased Y poses a considerably better fit to the observations. Such extreme levels of He have been found for second-generation stars in the most massive globular clusters (Milone18; Zennaro19). Nevertheless, characteristic chemical signatures of these peculiar stars are strong enhancements in light elements such as N, Na, and Al in lockstep with depletions of O and Mg (e.g., Bastian18); none of which were found here (see Sects. [5.1](#S5.SS1 "5.1 Light elements (Z ≤ 8) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") and [5.3](#S5.SS3 "5.3 Abundances up to Zn (11 ≤ Z ≤ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). As a consequence, it is unlikely that HD 20 is a classical red giant star with high Y.
Unfortunately, our TESS light curve of HD 20 cannot be used to analyze the period spacing of the l=1 mixed gravity and pressure modes to distinguish between helium-burning and non-helium-burning evolutionary stages as described by, for instance, Bedding11 and Mosser12b. For achieving this, a much longer time baseline than the available 27 days would be required in order to allow for a finer scanning of the frequencies around fmax and the identification of subordinate peaks in the power spectrum.
###
5.3 Abundances up to Zn (11 ≤ Z ≤ 30)

Figure 16: Comparison of HD 20 (blue circle) to the metal-poor field star compilation (gray dots) by Roederer14 and the red horizontal branch star HD 222925 (Roederer18, red circle). Dark blue circles and error bars indicate the result in LTE while the light blue circles indicate the NLTE-corrected ones. In the reference samples, corrections have been applied to O i, Na i, and K i. On the abscissa we show abundances from Fe ii since these are less prone to departures from the LTE assumption (Sect. [3.3](#S3.SS3 "3.3 Spectroscopic model atmosphere parameters ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")).

Figure 17: Residual abundance pattern from O to Zn between HD 20 and HD 222925 after scaling by the difference in logϵ(Fe\textscii) of 0.13 dex. NLTE abundances were used for both stars for Na i and K i (red filled circles).
We could deduce abundances for 22 species of 17 chemical elements in the range 8≤Z≤30. For the α-elements Mg, Si, S, Ca, and Ti we report a mean enhancement of [α/Fe]=0.45 dex in LTE, which is in disagreement with the finding by Barklem05 where a conversion to the Asplund09 scale yields 13[(Mg+Ti+Ca)/Fe]≈0.23 dex. The discrepancy is alleviated when using the same elements for comparison, that is, 13[(Mg+Ti\textsci+Ca)/Fe]=0.34±0.13 dex or 13[(Mg+Ti\textscii+Ca)/Fe]=0.38±0.07 dex. In light of Appendix [B.2](#A2.SS2 "B.2 Impacts of model atmosphere errors ‣ Appendix B Abundance systematics ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), the origin for the observed difference is likely to be tied to their substantially hotter Teff (see discussion in Sect. [A.1.3](#A1.SS1.SSS3 "A.1.3 Color - [Fe/H] - Teff calibrations ‣ A.1 Effective temperature ‣ Appendix A Alternative methods for determining stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). Our value is typical for MW field stars at this [Fe/H] where nucleosynthetic processes in massive stars have played a dominant role in the enrichment of the ISM and supernovae of type Ia (mostly Fe-peak yields) have not yet started to contribute (e.g., McWilliam97). A minimum χ2 fit to the SN yields from Heger10 using StarFit141414<http://starfit.org/> (see Placco16; Chan17; Fraser17, for detailed discussions) shows that the lighter elements of HD 20 – in NLTE – can be well reproduced by a ∼11.6M\sun faint CCSN with an explosion energy of 0.6⋅1051 erg. We stress that at HD 20’s metallicity we are likely not dealing with a single SN enrichment. Nevertheless, we are looking for a dominant contribution, which might survive even if it is highly integrated over time.
Overall, we find an excellent agreement of the deduced abundances with the field population at similar metallicities as demonstrated in Fig. [16](#S5.F16 "Figure 16 ‣ 5.3 Abundances up to Zn (11 ≤ Z ≤ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), where our findings are overlayed on top of the sample of metal-poor stars by Roederer14. For elements with two available species we only present one representative. There are only two departures from the general trends: O and Co, which both are enhanced in comparison. However, as already noted in Roederer14, the reference sample shows trends with stellar parameters – most notably Teff – and thus evolutionary state. For elements heavier than N, mixing (Sect. [5.1](#S5.SS1 "5.1 Light elements (Z ≤ 8) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) cannot be responsible for these trends, hence indicating contributions from systematic error sources in the abundance analyses. We therefore compare HD 20 to HD~222925, a star that was recently studied in great detail by Roederer18 and found to occupy a similar parameter space (Teff=5636 K, logg=2.54 dex, and [Fe/H]=−1.47 dex). Its light-element abundances are also indicated in Fig. [16](#S5.F16 "Figure 16 ‣ 5.3 Abundances up to Zn (11 ≤ Z ≤ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") and we present a differential comparison in Fig. [17](#S5.F17 "Figure 17 ‣ 5.3 Abundances up to Zn (11 ≤ Z ≤ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."). After correcting for the difference in metallicity (0.13 dex), we find a remarkable match between the two stars in the considered range (reduced χ2 of 0.49). Similarities between the two stars have already been reported in the literature from a kinematical point of view (Roederer18a) and based on their metallicity (Barklem05; Roederer18). We emphasize, however, that the similarities do not extend to the neutron-capture regime, since HD 222925 is an r-II and HD 20 an r-I star with possible s-process contamination, as outlined in the following section.
###
5.4 Neutron-capture elements (Z ¿ 30)

Figure 18: Neutron-capture abundance pattern in LTE. Upper panel: HD 20’s heavy element abundances are indicated in blue. Shown in gray and red are abundances of the r-II star CS 22892-052 by Sneden03 and the r-I star BD +17 3248 by Cowan02 with updates from Cowan05 and Roederer10. The omitted Lu abundance for BD +17 3248 (see main text) is depicted in light red. Both patterns were scaled to achieve the overall best match to HD 20 in the entire considered range. The gray solid line denotes the solar-scaled r pattern from Sneden08 and the best-fit AGB model (see text) is represented by dotted lines. Middle panel: Residual pattern between HD 20 and the solar r pattern (gray line), CS 22892-052 (gray), and BD +17 3248 (red). Lower panel: Residual pattern after mixing a contribution from BD +17 3248 with s-process material from the AGB yield model.
In order to delineate the nucleosynthetic processes that contributed to the observed abundances of heavy elements (Z>30) in HD 20, we compare to a set of observed and predicted patterns. Following the classification scheme by Beers05, our findings of [Eu/Fe]=+0.73 dex and [Ba/Eu]=−0.38 dex place HD 20 in the regime of a typical r-I star. As indicated by the comparison in the top and middle panels of Fig. [18](#S5.F18 "Figure 18 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), HD 20’s heavy-element pattern from Nd to Ir (60≤Z≤77) is consistent with the scaled solar r-process by Sneden08 when considering observational errors. In the light neutron-capture regime from Sr to Ag (38≤Z≤47), however, the agreement is poor. This behavior is archetypal for r-process rich stars (e.g., Roederer18) and led to the postulation of the existence of an additional, low-metallicity primary production channel of yet to be identified origin (the so-called weak r or lighter element primary process, McWilliam98; Travaglio04; Hansen12; Hansen14a).
In Fig. [18](#S5.F18 "Figure 18 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), we further compare to the well-studied benchmark r-II and r-I stars CS~22892-052 (Sneden03) and BD~+17~3248 (Cowan02; Cowan05; Roederer10). The latter is a red horizontal branch star that is reasonably close to HD 20 in stellar parameter space (Teff=5200 K, logg=1.80 dex, [M/H]=−2.0 dex, vmic=1.9 km s−1) – a circumstance that effectively reduces the impact of systematics (e.g., due to NLTE effects, see also Appendix [B.2](#A2.SS2 "B.2 Impacts of model atmosphere errors ‣ Appendix B Abundance systematics ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) on differential comparisons. In our analyses, we omitted the Lu abundance for BD +17 3248 from the UV Lu ii line reported by Roederer10, because – regardless of the substantial quoted error of 0.3 dex – it appears to represent a strong, likely unphysical outlier. We stress that neither of the abundance patterns attributed to the two stars is necessarily a tracer of a pure nuclear process. In contrast, they are likely to represent integrated signatures with different contributions from both the main and weak primary r-components (cf., Li13; Hansen14a). The abundances in the range 38≤Z≤90 for the two reference stars were scaled such that the reduced sum of the normalized quadratic deviations, χ2r, was minimized (see middle panel of Fig. [18](#S5.F18 "Figure 18 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). Both patterns reproduce the depression of Y between Sr and Zr ([Y/¡(Sr,Zr)¿]=−0.33 dex) and the deviation of Ag from the solar r-process. This points towards an enrichment contribution to HD 20 by the weak r-process as postulated earlier.
Nevertheless, the overall residual abundances from Sr to Zr as well as from Ba to Pr appear enhanced with respect to the heavy r-nuclei (Z≥60). Another particularly outstanding residual feature is a statistically significant downward trend from Ba to Yb, which seems slightly less pronounced in the comparison involving BD +17 3248. In solar system material, the lighter elements in question have dominant contributions from the s-process (Bisterzo14; Prantzos19), leading to the intriguing conclusion that – despite its moderately low metallicity – HD 20’s natal cloud might have been polluted with s-process material. In order to test this hypothesis, we mixed the pattern of BD +17 3248 as proxy for an integrated r-process pattern with main s-process yield models for thermally pulsing AGB stars with a standard 13C pocket from the FUll-Network Repository of Updated Isotopic Tables & Yields (F.R.U.I.T.Y. Cristallo11). An upper metallicity limit was placed at [Fe/H]=−1.6 (Z=0.0003), since it is infeasible for AGB polluters to have had higher [Fe/H] than HD 20 itself. We retrieved models for all remaining metallicities, stellar masses, and rotational velocities available through F.R.U.I.T.Y.. In addition, a set of newly computed models with initial rotational speeds of 30 and 60 km s−1 for stellar masses of 2 and 5M⊙ was included.
In the past, rotation has been considered as a potential process able to reproduce the observed spread in s-process elements at various metallicities (see Piersanti13, and references therein). However, depending on the adopted physical prescriptions, different results have been obtained (see, e.g., Langer99; Siess04; denHartogh19). Moreover, it has to be taken into account that recent asteroseismic measurements of low-mass stars in the Galactic disk (see, e.g., Mosser12) demonstrated that stars belonging to the red clump region are characterized by slowly rotating cores. This latter feature tends to exclude the possibility to have fast-rotating cores for low-mass AGB stars in the solar neighborhood (which is an essential condition in order to have sizeable effects lead by rotation-induced mixing). However, the same has not yet been confirmed for stars with larger masses (M>3M\sun) and/or at low metallicities ([Fe/H]<−1 dex).
The optimal mixture of integrated r- and main s-contributions to the overall neutron-capture budget of HD 20 was obtained by minimizing the expression
| | | | |
| --- | --- | --- | --- |
| | χ2=∑i(log(a⋅ϵr,i+b⋅ϵs,i)−logϵHD20,i)2/σ2i, | | (9) |
with a and b being the weight coefficients for the two r and s template patterns and the index i denoting those individual elements in the range 38≤Z≤90 with available entries for the HD 20 pattern, the BD +17 3248 pattern, and the AGB yield tables.
![Comparison of [hs/ls] and upper limit on [Pb/hs] for HD 20 against AGB ](https://media.arxiv-vanity.com/render-output/7950277/x19.png)
Figure 19: Comparison of [hs/ls] and upper limit on [Pb/hs] for HD 20 against AGB s-process models of different initial masses. The value determined for HD 20 is indicated by blue horizontal lines and error margins, while models of Z=0.0001 without rotation are shown in red. The adopted best-fit model with a rotation of 30 km s−1 is depicted in orange. For juxtaposition, we show i-process predictions for [hs/ls] from Hampel16 for four different neutron densities, ni, in the upper panel using black dotted, dash-dotted, dashed, and solid lines (see legend).
A decisive observational quantity for pinpointing the AGB model mass is the ratio [hs/ls] of mean abundances for the heavy-s (hs, represented by Ba, La, and Ce) and light-s (ls, represented by Sr, Y, and Zr) elements. The models predict supersolar [hs/ls] at low masses (≲3M⊙) with a decreasing trend with increasing model mass. Close-to solar ratios are found in the region between 3M\sun and 5M\sun. This behavior is demonstrated in Fig. [19](#S5.F19 "Figure 19 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), where we also indicate the solar [hs/ls] measured for HD 20 (0.00 dex151515Here we mention a ratio that was filtered for the r-process contribution (see later in this Sect.) as compared to the unfiltered value of 0.18 dex.). We conclude that main s-process contributions are likely to originate from high-mass (>3M⊙) AGB stars. This is bolstered by only being able to deduce an upper limit for HD 20’s Pb abundance – an element that is predicted to have strong contributions from models with masses <5M⊙ (e.g., Bisterzo12; Cristallo15). The large contribution to Pb comes from the radiative burning of the 13C(α,n)16O reaction (see Straniero95), which is the dominant source in low-mass AGB stars. On the other hand, in more massive AGBs major neutron bursts come from the 22Ne(α,n)25Mg reaction, which is efficiently activated at the bottom of the convective shells during thermal pulses. These episodes commonly lead to minor Pb production161616Telling the whole truth, also massive AGBs can produce large amounts of Pb, but this occurs at very low metallicities only (i.e., [Fe/H]<−2 dex).. At the same time, it is expected to find large Rb excesses from these massive AGB stars and their 22Ne(α,n)25Mg neutron source, manifesting in, for example, supersolar [Rb/Zr] (Garcia-Hernandez09; Perez-Mesa17). For HD 20, we found [Rb/Zr]<0.55 dex from the upper limit on the Rb abundance and after filtering our Zr finding from its dominant r-process contribution (see later in this Sect.). This upper limit is ∼0.2 dex higher than the largest predictions from our employed, massive (i.e., 4-5M\sun) AGB models. A robust measurement of Rb could be used to place further constraints on the exact initial mass of the polluting AGB star. In order to achieve this, spectra with an even higher S/N in the region around 7800 Å are required.

Figure 20: Estimated r- (right-hand scale) and s-fractions (left-hand scale) in HD 20 based on Eq. [10](#S5.E10 "(10) ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") with BD +17 3248 as proxy for an r pattern and the best-fit AGB model representing the s-enrichment site. Shown are only those elements that have a measured abundance in BD +17 3248.
By minimizing Eq. [9](#S5.E9 "(9) ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") we found the best-fit (χ2r=0.29) AGB model to be the one with 5M⊙, Z=0.0001 ([Fe/H]≈−2.15 dex), and a rotational velocity of 30 km s−1. Here, the model with non-zero angular momentum poses a slightly better fit than its non-rotating counterpart with all other parameters kept fixed (see also Fig. [19](#S5.F19 "Figure 19 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") top panel). The adopted mixture can successfully reproduce the entire neutron-capture pattern in HD 20. This includes the observations for the commonly employed tracers [hs/ls] and [Ba/Eu], as well as the downward trend from Ba to Yb that persists when assuming an r-only enrichment.
Using yields from the aforementioned main s-model and the BD +17 3248 pattern together with the best-fit model parameters for Eq. [9](#S5.E9 "(9) ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), we can estimate the fractional (integrated) r- and (main) s-process contributions to individual elements in HD 20 through
| | | | |
| --- | --- | --- | --- |
| | fr,i=a⋅ϵr,ia⋅ϵr,i+b⋅ϵs,i; fs,i=1−fr,i. | | (10) |
In order to properly account for fit uncertainties, we sampled the posterior distribution of the parameters a and b with emcee using the abundance errors. In Fig. [20](#S5.F20 "Figure 20 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") we show 800 individual realizations of the samples. From these, the fractions and asymmetric limits were estimated from the median, the 15.9th, and the 84.1th percentiles, respectively. These are listed in Table [5](#S5.T5 "Table 5 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).").
| Z | Element | fr | fs |
| --- | --- | --- | --- |
| | | [%] | [%] |
| 38 | Sr | 67.0+15.6−15.7 | 33.0+15.7−15.6 |
| 39 | Y | 40.2+21.0−14.4 | 59.8+14.4−21.0 |
| 40 | Zr | 56.2+18.9−16.3 | 43.8+16.3−18.9 |
| 42 | Mo | 75.5+12.4−14.0 | 24.5+14.0−12.4 |
| 44 | Ru | 90.9+5.0−7.1 | 9.1+7.1−5.0 |
| 45 | Rh | 95.8+2.3−3.6 | 4.2+3.6−2.3 |
| 46 | Pd | 77.1+11.6−13.5 | 22.9+13.5−11.6 |
| 47 | Ag | 90.0+5.5−7.6 | 10.0+7.6−5.5 |
| 56 | Ba | 58.4+18.3−16.3 | 41.6+16.3−18.3 |
| 57 | La | 80.2+10.3−12.5 | 19.8+12.5−10.3 |
| 58 | Ce | 79.8+10.4−12.6 | 20.2+12.6−10.4 |
| 59 | Pr | 92.0+4.4−6.4 | 8.0+6.4−4.4 |
| 60 | Nd | 89.7+5.7−7.9 | 10.3+7.9−5.7 |
| 62 | Sm | 93.4+3.7−5.4 | 6.6+5.4−3.7 |
| 63 | Eu | 98.7+0.7−1.2 | 1.3+1.2−0.7 |
| 64 | Gd | 96.1+2.2−3.4 | 3.9+3.4−2.2 |
| 65 | Tb | 98.1+1.1−1.7 | 1.9+1.7−1.1 |
| 66 | Dy | 97.8+1.3−2.0 | 2.2+2.0−1.3 |
| 67 | Ho | 98.5+0.9−1.4 | 1.5+1.4−0.9 |
| 68 | Er | 97.2+1.6−2.5 | 2.8+2.5−1.6 |
| 69 | Tm | 97.8+1.2−1.9 | 2.2+1.9−1.2 |
| 72 | Hf | 80.6+10.1−12.3 | 19.4+12.3−10.1 |
| 76 | Os | 98.3+1.0−1.5 | 1.7+1.5−1.0 |
| 77 | Ir | 99.3+0.4−0.6 | 0.7+0.6−0.4 |
| 90 | Th | 100.0+0.0−0.0 | 0.0+0.0−0.0 |
| | | | |
Table 5: Estimated fractional contributions from the r- and s-process for elements with Z≥38 in HD 20.
We find significant s-process fractions above 30% for the elements Sr, Y, Zr, and Ba, whereas only Y might have had a dominant (fs>50%) enrichment contribution from the s-process. This could be corroborated by measuring isotopic fractions for selected elements from spectra at very high resolution (e.g., Mashonkina19), though we note that HD 20 shows considerable intrinsic line broadening signatures (Sect. [3.4](#S3.SS4 "3.4 Line broadening ‣ 3 Stellar parameters ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) that may exceed the hyperfine splitting effect.
An important question to answer with respect to our proposed s-process imprint is whether the finding is caused by mixing in the ISM prior to the formation of HD 20, or as a result of surface pollution via mass transfer in a binary system (e.g., Gull18). The latter option was ruled out with high confidence in Sect. [2.2](#S2.SS2 "2.2 Radial velocities and binarity ‣ 2 Observations and data reduction ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), where we showed a lack of radial velocity variation. Therefore, a binary signal could only be hidden if the orbit would be seen almost perfectly face-on. Consequently, we strongly prefer the scenario where HD 20 had its chemical pattern composition mixed in the ISM.
###
5.5 i-process considerations
Another metal-poor star with signatures of simultaneous overabundances in both s- and r-process material is HD 94028. Among others, this star has been studied spectroscopically by Roederer12 and Roederer16, who complemented the abundance pattern from elements typically found in the optical with more exotic species (e.g., Ge, As, Se) that are only measurable in ultraviolet spectra gathered with the Hubble Space Telescope (HST). The authors concluded that several abundance ratios – most notably supersolar [As/Ge], [Mo/Fe], and [Ru/Fe] – are poorly described by combinations of s- and r-process patterns and therefore suggested an additional contribution by the i-process. However, more recently, Han18 indicated that both [As/Ge] and [Mo/Ru] may be well explained by weak r-nucleosynthesis without the need for an additional i-process. The i-process was also proposed by Koch19 as a candidate to reproduce their observed pattern for a metal-poor bulge star (labeled #10464, following the naming convention in Koch16). The authors find that either a mixture of an i-pattern with a main s-pattern or an i-process with two proton ingestion events reproduces their observations best.

Figure 21: Comparison of the residual HD 20 pattern (purple, same as lower panel of Fig. [18](#S5.F18 "Figure 18 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")) to the patterns of HD 94028 (red) and #10464 (green) after subtracting r+s and s contributions, respectively. The residual pattern for HD 94028 was determined following the procedure outlined in Sect. 5.2. in Roederer16, whereas a Z=0.0001, 2M\sun AGB model was assumed for the s-enrichment in #10464.
Based on Fig. [19](#S5.F19 "Figure 19 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), the residual [hs/ls] of HD 20 is seen to be well described by a 5M\sun rotating AGB star, while the i-process of intermediate neutron densities predicts much too high [hs/ls] ratios. In any case, we compare HD 20 to the two supposedly i-enriched stars to search for i-process indications in the patterns. By comparing to the filtered patterns of HD 94028−(r+s) and #10464−s (Fig. [21](#S5.F21 "Figure 21 ‣ 5.5 i-process considerations ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")), no clear i-process features stand out, and we cannot claim any i-process contribution in HD 20. However, some weak r-enrichment might have taken place. Until further i-process indications, such as elemental ratios [As/Ge] or strong pattern trends can robustly be associated with the i-process, it is hard to observationally investigate such contaminations. In order to test [As/Ge] we would need HST data.
###
5.6 Cosmochronological age
| ratio | logϵ(Th/r)a0 | logϵ(Th/r) | age |
| --- | --- | --- | --- |
| | [dex] | [dex] | [Gyr] |
| Th/Eu | −0.276 | −0.50±0.14 | 10.0±6.5 |
| Th/Hf | −0.063 | −0.62±0.14 | 26.0±6.5 |
| Th/Os | −1.009 | −1.25±0.14 | 11.3±6.5 |
| Th/Ir | −1.022 | −1.27±0.14 | 11.6±6.5 |
| Th/U | −0.192 | >0.36 | >7.8b |
171717
$a$$a$footnotetext: Production ratios from method “fit1” in Table 2 of Kratz07.
$b$$b$footnotetext: Calculated using Eq. (2) in Cayrel01.
Table 6: Age estimates from different radioactive chronometers.
Having measured a reliable abundance for the radioactive element Th enables an estimate of HD 20’s age from nuclear cosmochronology. The only isotope of Th with a lifetime that is relevant on cosmological timescales is 232Th (τ1/2=14.05 Gyr). The currently observed ratio logϵ(Th/r) of Th and other, stable r-elements can be related to a decay time using a theoretical initial production ratio, logϵ(Th/r)0, together with the age relation
| | | | |
| --- | --- | --- | --- |
| | | | (11) |
as outlined by Cayrel01. For Table [6](#S5.T6 "Table 6 ‣ 5.6 Cosmochronological age ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).") we considered the reference elements Eu and Hf as well as the third-peak elements Os and Ir. Moreover, we obtained a lower-limit age of 7.8 Gyr from our upper limit on the U abundance. Despite considerable ambiguities in theoretical production ratios (e.g., Schatz02; Cowan99), the dominant source of error for the inferred ages is the combined uncertainty of the abundances for each pair, which amounts to √2⋅0.1 dex=0.14 dex. The latter uncertainty linearly propagates into an age error of 6.5 Gyr (see also Ludwig10, for a detailed discussion of other error sources). According to Cayrel01, the observational and theoretical uncertainties are minimized by using Os and Ir as baseline for the chronometers, since they are closest to Th in atomic number. However, we note that both Os and Ir were determined from the neutral species while our Th abundance was deduced from the singly ionized state, which potentially introduces biases due to NLTE effects181818Furthermore, as demonstrated in Appendix [B.2](#A2.SS2 "B.2 Impacts of model atmosphere errors ‣ Appendix B Abundance systematics ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), among all relevant elements the two referred ones are most sensitive to uncertainties in the model temperature.. As indicated by Hansen18, NLTE effects on Th ii abundances may be alleviated by introducing a full, 3D NLTE treatment. Hence the obtained abundance would be close to our 1D LTE estimate.
The age of 26.0 Gyr from Th/Hf appears unreasonably high and we note that Roederer09 reported on a similar behavior for this chronometer. We thus suspect that the initial production rates are overestimated, which might be connected to a breakdown of the robustness of the heavy r-pattern in the region around Hf (M. Eichler priv. comm.). Removing our estimated high s-process contribution (19.4%) for Hf only slightly decreases the deduced age by about 4 Gyr. In any case, we exclude the corresponding age from consideration and calculate a mean age of 11.0±3.8 Gyr from the remaining three actual detections (10.0, 11.3, and 11.6 Gyr, thereby excluding the lower limit involving U).
6 Summary and Conclusions
--------------------------
We present a detailed investigation of the chemical composition of the metal-poor ([Fe/H]=−1.60 dex), r-process enhanced (r-I) Galactic halo star HD 20. Using newly obtained and archival very high signal-to-noise and high-resolution spectra in concert with extensive photometry and astrometry from the Gaia and TESS missions, we carefully investigate the key fundamental stellar parameters, which are independently confirmed by a number of alternative approaches. These allow for a high-precision spectroscopic chemical analysis, yielding abundances for 25 species of 20 elements with Z≤30, as well as for 29 species of 28 neutron-capture elements. Hence, we report on abundances for in total 48 elements, thereby adding 26 elements to the largest existing study of this star by Barklem05. Moreover, we deduce meaningful upper limits for Li, Rb, Pb, and U. This renders our presented abundance pattern one of the most complete available to date and therefore adds HD 20 to the short list of benchmark stars for nuclear astrophysics involving traces of only r+s processes.
Regarding the light elements up to Zn we find a behavior typical for the Galactic halo at comparable metallicities indicative of an enrichment history dominated by CCSNe prior to the onset of contributions by supernovae of type Ia. Using yield models, we could show that faint CCSNe of progenitor masses around ∼11.6M\sun and explosion energies ∼0.6⋅1051 erg can explain the light-element pattern in HD 20. While the heavy neutron-capture elements are found to closely follow the solar r-process distribution, strong deviations are found with respect to the first-peak elements, primarily due to depletions in Y and Ag. We attribute this observation to the additional primary (weak) r-process acting at low metallicity that was postulated based on observations of other metal-poor stars (e.g., Hansen12). This emphasizes that the solar-scaled r-pattern cannot pose as a universal proxy for the r-process, particularly in the lighter neutron-capture regime.
In comparing our observed neutron-capture abundances to the benchmark r-I star BD +17 3248 – which was chosen in order to lessen the gravity of systematic abundance errors – we find that several elements (Sr, Y, Zr, Ba, La) that are commonly associated with the s-process appear to be enhanced in HD 20 with respect to a pure r-process pattern. We obtain a considerably better fit of the overall distribution by introducing a dilution with material from main s-process yield predictions of a low-metallicity, massive, and rotating AGB star. Based on this model, we estimate a dominant s-process fraction for Y (59.8+14.4−21.0%), whereas several other elements may still have a significant contribution from this production channel (Table [5](#S5.T5 "Table 5 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A)."), Fig. [20](#S5.F20 "Figure 20 ‣ 5.4 Neutron-capture elements (Z ¿ 30) ‣ 5 Results and Discussion ‣ A high-precision abundance analysis of the nuclear benchmark star HD~201)This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 2)Based in part on data products from observations made with ESO Telescopes under program IDs 090.B-0605(A) (PI: Chanamé) and 60.A-9036(A).")). Given the here presented abundance pattern for HD 20, we prefer an r+s mixing scenario and refute i-process contributions until more robust abundance ratios or patterns will be put forward. Based on the lacking evidence of HD 20 being part of a binary system, we propose that the mixing happened in the ISM prior to the star’s formation as opposed to surface pollution due to mass transfer from a companion.
HD 20’s age is estimated at 11.0±3.8 Gyr based on nuclear cosmochronology from abundance ratios involving the radioactive element Th. We caution, however, that there are statistical and systematic error sources of both observational and theoretical nature that may bias this measure. Nonetheless, it appears safe to assume that the star is a representative of the old Galactic halo.
A future perspective for work on HD 20 is to complement our abundance pattern with UV spectra from HST. Deriving abundances from UV lines is extremely important in order to obtain more complete patterns. Key elements like As and Au carry important information on the neutron-capture environment and can only be assessed in HST data. Arsenic could contain crucial clues on the i-process, which we cannot explore in the ground-based, spectroscopically derived abundances, and Au is a good r-process indicator. An additional element that is more easily measured in the UV is Pb, which is an important s-process tracer for which we could only deduce an upper limit abundance in this study. Furthermore, understanding how and if the neutron-capture processes are formed and incorporated into later generations of stars is crucial to understand the need for an i-process versus efficient and fast mixing of r+s-process material in the ISM. Here, HD 20 offers promising insights into the neutron-capture processes as it is slightly enhanced and we detect clear traces of both r and s. It poses a powerful benchmark and it is far less polluted than the sun.
###### Acknowledgements.
M.H., H.-G.L., and E.K.G. gratefully acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 138713538 – SFB 881 (“The Milky Way System”, subprojects A03, A04, and A08). C.J.H. acknowledges support from the Max Planck Society and the ‘ChETEC’ COST Action (CA16117), supported by COST (European Cooperation in Science and Technology). The authors are grateful to M. Catelan, A. Koch, Z. Prudil, and A. Gallagher for fruitful discussions. Furthermore, the help by M. Kovalev and M. Bergemann in setting up a grid of NLTE corrections for Fe lines is highly appreciated. A. Heger and C. Chan are acknowledged for their detailed support for the application of StarFit. We thank T. Nordlander and K. Lind for providing access to their grid of NLTE corrections for Al. We value the swift and constructive report issued by the anonymous referee. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. This paper presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is <https://www.cosmos.esa.int/gaia>. This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program. This research made use of Lightkurve, a Python package for Kepler and TESS data analysis (lightkurve).
|
86002be6-d9d2-4cad-a9d0-c6ccf36a7b9d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Can you enter the Matrix? The deliberate simulation of sensory input.
Related to: Generalizing From One Example
I am able to simulate sensory input, i.e. dream deliberately, enter my personal Matrix (Holodeck). I can see, hear, feel and smell without the presence of light, sound, tactile or olfactory sensory input. That is, I do not need to undergo certain conditions to consciously experience them. They do not have to happen live, I can imagine them, simulate them. I can replay previous and create new sensory experiences in my mind, i.e. perceive them with my minds eye. I can live, pursue and experience activities inside my head without any environmental circumstances, i.e. all I need is my body. I can walk through a park, see and hear children playing, feel and smell the air, while being weightlessness in a totally dark and quiet zero gravity environment.
After reading the article by Yvain some time ago the idea(?) that some people are unable to deliberately experience the world with their mind's eye hasn't ceased to fascinate me. So yesterday I came back to search the comments on that article if there are people who actually confirm this claim. The comments by Garth and Blueberry seem to suggest this. After that I started to ask other people and was amazed that after some misunderstanding the first two people I asked were both either completely or almost unable to experience anything if it wasn't happening live. This is shocking. The second person was actually my dad. I asked, "if you were to close your eyes and I told you that I changed the lighting in the kitchen from normal to red, could you imagine how it would look like"? "Well, yes" he said. Only after about 10 minutes I figured what he meant is that he would be able to describe it, paint it or pinpoint other characteristics of his kitchen equipped with all red lighting, but he couldn't actually dream it! He couldn't see it if he closed his eyes! He always thought that when people say they imagine a beautiful sunset they actually mean that they could describe it or picture
|
97fb58b6-6697-4206-8aef-e17e05567998
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Four reasons I find AI safety emotionally compelling
‘I don’t feel emotionally motivated to work on AI safety, even though I’m intellectually convinced that it’s important.’
It always surprises me when people say this because I find my work at [Nonlinear](https://www.nonlinear.org/) on AI safety *incredibly* motivating. I’m sharing my reasons in the hope that they’ll resonate with some of you, and that these ideas will help bring your emotional drives into greater harmony with your abstract convictions.
1. The dramatic scale of AGI is inspiring
=========================================
When I was a kid, I wanted to save the world. Like many EAs, I was obsessed with stories of superheroes who could use their powers to save whole cities from catastrophe. I aspired to be like Gandhi, or Martin Luther King, and to do something really big and important; something that would forge widespread, lasting change. But as I got older, these dreams collided violently with the harsh reality: **changing things is *really really hard*.** I became more realistic and scaled down my ambitions.
I started off working on global health. In this area, the most I could hope for was (for example) to reduce infant mortality by a percent or two in a single country - and that’s if my career was an astonishing success. This is significant - that’s a lot of lives saved! - but it’s far more modest than my childhood ambitions. I resigned myself to the fact that while I could make significant improvements to the world, the problems we face would remain immense and oppressive.
Similarly, at one time, I felt most motivated to work on animal welfare. But again: even if I was unusually successful in my work, I could only hope to make a small dent in the problem. I might (for example) secure slightly better conditions for factory-farmed chickens in the US, but there would still *be* factory farms. And even if factory farms were abolished, what about the suffering of animals in the wild?
This situation was demotivating over time: the thought of so much intense pain was emotionally crushing, and I knew that even if I worked extremely hard and improved some animals’ lives, those improvements would be dwarfed by the suffering that remained.
With AGI, on the other hand, my ambitions to *really change things* have returned. **AI is likely to change the world *massively:*** for better or for worse. When we talk about AI, we often focus on its dangers: the fact that misaligned AI could kill us all or cause astronomical suffering. But there is a more positive side: if we manage to align AGI - if we get it *right* - it could fix *so* much. A superintelligent AI aligned with human values could end poverty, war, oppression, abuse, suffering itself - you name it. It could improve the world far more than any other intervention or advance to date.
**This could be the [most](https://www.cold-takes.com/most-important-century/)** **[important](https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1#Might_today_be_merely_an_enormously_influential_time_) cause in all of history.** This time, if enough people work hard, if we’re thoughtful and strategic, we could really achieve utopian improvements to the human condition. I find this vision incredibly inspiring.
2. AI safety means you don’t have to “choose” a cause
=====================================================
Because aligned AI could have such wide-reaching effects, I find it easier to wholeheartedly dedicate myself to AI alignment. I used to feel terrible about all the problems I *wasn’t* helping with. As EAs, we know that we are [always in triage](https://mhollyelmoreblog.wordpress.com/2016/08/26/we-are-in-triage-every-second-of-every-day/) and need to prioritize ruthlessly, but it can still feel painful, even if we know it’s the right thing to do.
I was working on poverty and animal welfare, but then I’d think - what about domestic abuse? What about the lobsters you see in tanks? What about depression? What about human rights abuses? What about North Korea? Or just plain old cancer, heart disease, death?
In the past, when I thought about one of these problems, I had to regretfully set it aside, and tell myself ‘yes, this is bad; but what you’re working on is *more* impactful. Stay focused!’
**Now, though, whenever I remember that billions are still poor, or wild animals are being torn apart by predators, or that so many people still live under oppression and tyranny, I think to myself ‘AI could fix this’.** Before, other problems were distracting; now, they only add fuel to my motivation to help AI go right. An aligned superintelligence could solve every problem that (for now) we neglect in order to work on aligning superintelligence; so choosing to work on AI safety seems like less of an agonizing trade-off.
3. It’s globally *and* locally helpful
======================================
If you work on AI risk, **you don’t need to make trade-offs between your self-interest and your altruistic desires,** because if AGI is developed in your lifetime, it will benefit or harm *everyone*.
For most EA cause areas, you need to be motivated by pure altruism; when you work on the cause, you are not working towards your personal goals. Working on animal welfare or global development benefits others, but it doesn’t benefit *us*. But **when I work on AI safety, this could directly benefit me and my loved ones,** as well as countless others. If a misaligned superintelligence kills us all within my lifetime, then it won’t just harm strangers living on the other side of the world, or unknown future people - it will harm *me* and everyone I care about!
Similarly, if we create an aligned superintelligence, it could solve problems that directly affect me: sickness, sadness, death. **AI could make us immortal!** This would be pretty motivating by itself, even if I didn’t have any purely altruistic motivation.
4. It’s awe-inspiring to try to build something so good
=======================================================
There is something deeply awe-inspiring about building something that is so much smarter, stronger, and *better* than us. We talk a lot about how much AI could surpass us in skill or intelligence; but (if we succeed), **our superintelligence will also surpass us in *goodness*.** It could embody all of our human virtues without our human vices: our partiality, our irrationality, the biases baked into us by evolution. It could be genuinely impartial and benevolent in a way that humans can’t.
I want something that good to exist.
And this and so many other reasons are why I find AI safety to be one of the most compelling cause areas I’ve ever worked in.
Let me know other reasons you find AI x/s-risks emotionally compelling in the comments. It would be great to compile the best ones and have a piece you can re-read or point people towards when they need a boost.
***Reminder that* *[you can listen to this post on your podcast player](https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library)* *once*/*if it reaches 25 upvotes using* *[the Nonlinear Library](https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library).***
*This post was written collaboratively by Kat Woods and Amber Dawn Ace as part of Nonlinear’s experimental Writing Internship program. The ideas are Kat’s; Kat explained them to Amber, and Amber wrote them up. We would like to offer this service to other EAs who want to share their as-yet unwritten ideas or expertise.*
***If you would be interested in working with Amber to write up your ideas, fill out [this form](https://forms.gle/g1dXj5CDtEt74fWx9)**.*
|
1c9408d9-c9c6-47e7-91cd-36ad91635c00
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
2011-2012 Winter Fundraiser Completed
Thanks to our dedicated supporters, we met our goal for our 2011-2012 Winter Fundraiser. Thank you!
The fundraiser ran for 56 days, from December 27, 2011 to February 20, 2012.
We exceeded our $100K goal, raising a total of $143,048.84 from 101 individual donors.
Every donation that the Machine Intelligence Research Institute receives is powerful support for our [mission](https://intelligence.org/files/strategicplan2011.pdf) — ensuring that the creation of smarter-than-human intelligence ([superintelligence](http://www.nickbostrom.com/ethics/ai.html)) benefits human society. We welcome donors [contacting us](mailto:[email protected]) to learn more about our pursuit of this mission and our continued expansion.
Keep your eye on this blog for regular progress reports from our executive director.
The post [2011-2012 Winter Fundraiser Completed](https://intelligence.org/2012/02/20/2011-2012-winter-fundraiser-completed/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
|
608a8ed4-33c2-45ff-b950-becaa063baea
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Incentives from a causal perspective
*Post 4 of*[*Towards Causal Foundations of Safe AGI*](https://www.alignmentforum.org/s/pcdHisDEGLbxrbSHD)*, preceded by*[*Post 1: Introduction*](https://www.alignmentforum.org/posts/3oQCY4he4zRzF6vQb/introduction-to-towards-causal-foundations-of-safe-agi)*,* [*Post 2: Causality*](https://www.alignmentforum.org/posts/9ag5JGBnMsayBidwh/causality-a-brief-introduction)*, and* [*Post 3: Agency*](https://www.alignmentforum.org/s/pcdHisDEGLbxrbSHD/p/Qi77Tu3ehdacAbBBe)*.*
*By Tom Everitt, James Fox, Ryan Carey, Matt MacDermott, Sebastian Benthall, and Jon Richens, representing the*[*Causal Incentives Working Group*](https://causalincentives.com/)*. Thanks also to Toby Shevlane and Aliya Ahmad.*
*“Show me the incentive, and I’ll show you the outcome” – Charlie Munger*
Predicting behaviour is an important question when designing and deploying agentic AI systems. Incentives capture some key forces that shape agent behaviour,[[1]](#fnvt2gxcuvqa) which don’t require us to fully understand the internal workings of a system.
This post shows how a causal model of an agent and its environment can reveal what the agent wants to know and what it wants to control, as well as how it will respond to commands and influence its environment. A [complementary result](https://arxiv.org/abs/1910.10362) shows that some incentives can only be inferred from a causal model, so a causal model of the agent’s environment is strictly necessary for a full incentive analysis.
Value of information
--------------------
What information would an agent like to learn? Consider, for example, Mr Jones deciding whether to water his lawn, based on the weather report, and whether the newspaper arrived in the morning. Knowing the weather means that he can water more when it will be sunny than when it will be raining, which saves water and improves the greenness of the grass. The weather forecast therefore has [information](https://arxiv.org/abs/2102.01685) [value](https://ieeexplore.ieee.org/document/4082064) for the sprinkler decision, and so does the weather itself, but the newspaper arrival does not.
We can quantify how useful observing the weather is for Mr Jones, by comparing his expected utility in a world in which he does observe the weather, to a world in which he doesn’t. (This measure only makes sense if we can assume that Mr Jones adapts appropriately to the different worlds, i.e. he needs to be [agentic](https://www.alignmentforum.org/s/pcdHisDEGLbxrbSHD/p/Qi77Tu3ehdacAbBBe) in this sense.)
The causal structure of the environment reveals which variables provide useful information. In particular, the [d-separation criterion](http://www.dagitty.net/learn/dsep/index.html#:~:text=d%2Dseparation%20is%20a%20criterion,ness%22%20or%20%22separation%22.) captures whether information can flow between variables in a causal graph when a subset of variables are observed. In single-decision graphs, value of information is possible when there is an information-carrying path from a variable to the agent’s utility node, when conditioning on the decision node and its parents (i.e. the “observed” nodes).
For example, in the above graph, there is an information-carrying path from forecast to weather to grass greenness, when conditioning on the sprinkler, forecast and newspaper. This means that the forecast can (and likely will) provide useful information about optimal watering. In contrast, there is no such path from the newspaper arrival. In that case, we call the information link from the newspaper to the sprinkler [nonrequisite](https://pubsonline.informs.org/doi/abs/10.1287/mnsc.47.9.1235.9779).
Understanding what information an agent wants to obtain is useful for several reasons. First, in e.g. fairness settings, the question of [**why**](https://arxiv.org/abs/2202.10816) [a decision](https://arxiv.org/abs/2306.00636) was made is often as important as **what** the decision was. Did gender determine a hiring decision? Value of information can [help us](https://arxiv.org/abs/2202.10816)[understand](https://arxiv.org/abs/2306.00636) what information the system is trying to glean from its available observations (though a formal understanding of [proxies](https://dl.acm.org/doi/abs/10.1145/3531146.3533242) remains an important open question).
More philosophically, some researchers consider an agent’s [cognitive boundary](https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02688/full) as the events that the agent cares to measure and influence. Events that lack value of information must fall outside the measuring part of this boundary.
Response Incentives
-------------------
Related to the value of information are **response incentives**: what changes in the environment would a decision chosen by an optimal policy respond to? Changes are operationalised as [post-policy](https://www.alignmentforum.org/posts/9ag5JGBnMsayBidwh/causality-a-brief-introduction#One_agent) interventions, i.e. as interventions that the agent cannot change its policy in response to (the decision might still be influenced under a fixed policy).
For example, Mr Jones is incentivised to adopt a policy that waters or not based on the weather forecast. This means that his decision will be responding to interventions both on the weather forecast, and to the weather itself (assuming the forecast reports those changes). But his watering decision will not respond to changes to the newspaper as it's a nonrequisite observation. He is also unable to respond to changes that are not causal ancestors of his decision, such as the groundwater level or the (future) greenness of the grass:

Response incentives are important because we want agents to respond to our commands in appropriate ways, such as [switching](https://intelligence.org/files/Corrigibility.pdf) [off](https://arxiv.org/abs/1611.08219) [when](https://arxiv.org/abs/2305.19861) asked. Conversely, in [fairness](https://arxiv.org/abs/1703.06856), we often want a decision to **not** respond to certain things, e.g. we don’t want a person’s gender to influence a hiring decision, at least not along [particular](https://arxiv.org/abs/1907.06430) [paths](https://arxiv.org/abs/2202.10816). For example, if an AI system is used to filter candidates for interview, what if gender only indirectly influences the prediction, via the university degree the person acquired?

A limitation of graphical analysis is that it can only make a binary distinction whether an agent is incentivised to respond or not *at all*. Further work may develop a more fine-grained analysis of when an agent will respond **appropriately**, which could be thought of as causal [mechanism design](https://en.wikipedia.org/wiki/Mechanism_design).
Value of Control
----------------
The dual of information is control. While information can flow in both directions over a causal relationship (the ground being wet is evidence of rain, and vice versa), influence can only flow forward over causal arrows. This makes it particularly easy to infer Value of Control from a causal graph, by simply checking for directed paths to the agent’s utility node.

For example, there is a directed path from weather to grass greenness, so Mr Jones may value controlling the weather. He might also value controlling the weather forecast, in the sense that he wants to make it more accurate. And trivially, he wants to control the grass itself. But controlling the newspaper lacks value, because the only directed path from the newspaper to the grass contains a nonrequisite information link.
Value of control is important from a safety perspective, as it reveals what variables the agent would like to influence if it could (i.e. it bounds the control-part of the agent’s [cognitive boundary](https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02688/full)).
Instrumental Control Incentives
-------------------------------
Instrumental control incentives are a refinement of value of control to nodes that the agent is both able and willing to control. For example, even though Mr Jones would like to control the weather, he is unable to, because his decision does not influence the weather (as there is no directed path from his decision to the weather):
There is a simple graphical criteria for an instrumental control incentive: to have it, a variable must sit on, or at the end of, a directed path from the agent’s decision to its utility (the grass sits at the end of the path sprinkler -> grass).
However, it is less obvious how to characterise instrumental control incentives in terms of behaviour. How do we know that the agent wants to control a variable that it is already able to influence? Simply giving the agent full control of the variable would not work, as that would bring us back to value of control.
In our [agent incentives paper](https://arxiv.org/abs/2102.01685), we operationalise it by considering a hypothetical environment, where the agent has two copies of its decision: one that only influences the environment via a variable V, and one that influences the environment in all other ways. If the first decision-copy influences the agent’s utility, then V has an instrumental control incentive. This makes sense, because only if the decision influences V, and V in turn influences utility, can the first decision-copy influence the agent’s utility. [Halpern and Kleimann-Weiner](http://www.mit.edu/~maxkw/pdfs/halpern2018towards.pdf) consider a related hypothetical: what if the variable wasn’t influenced by the agent’s decision? Would the agent then take a different action? This leads to [the same](https://causalincentives.com/pdfs/deception-ward-2023.pdf) graphical condition.
Instrumental control incentives have been used to analyse [reward tampering](https://arxiv.org/abs/1908.04734) and [user](https://arxiv.org/abs/2109.04083) [manipulation](https://arxiv.org/abs/2303.09387), leading to [path-specific objectives](https://arxiv.org/abs/2204.10018) as a suggested method for ethical content recommenders [see next post]. Other methods that remove instrumental control incentives include [decoupled approval](https://arxiv.org/abs/2011.08827), [current-RF optimisation](https://arxiv.org/abs/1908.04734), [counterfactual](https://arxiv.org/abs/1711.05541) [oracles](https://arxiv.org/abs/1906.08663), [countermeasures for auto-induced distributional shift](https://arxiv.org/abs/2009.09153), and [ignoring effects through some channel](https://www.lesswrong.com/posts/5bd75cc58225bf067037513c/maximizing-a-quantity-while-ignoring-effect-through-some-channel).
A question for future work is how to quantify an agent’s degree of influence, as discussed in the [agency post](https://www.alignmentforum.org/posts/Qi77Tu3ehdacAbBBe/agency-from-a-causal-perspective).
Multi-decision and multi-agent extensions
-----------------------------------------
Agents often interact over multiple timesteps with an environment that contains other agents as well. Sometimes, the single-decision, single-agent analysis extends to these settings, in one of two ways:
1. Assume all but one decision are following fixed, non-adaptive policies, or
2. Consider a **multi-decision policy** as a single decision, that simultaneously decides decision rules for all concrete decisions.
Both options have drawbacks. Option 2 only works in single-agent situations, and even in those situations some nuance is lost, as we can no longer say which decision an incentive is associated with.
Option 1 isn’t always an appropriate modelling choice, as policies [do adapt](https://www.alignmentforum.org/s/pcdHisDEGLbxrbSHD/p/Qi77Tu3ehdacAbBBe). Except for response incentives, the incentives we’ve discussed above are all defined in terms of hypothetical changes to the environment, such as adding or removing an observation (value of information), or improving the agent’s control (value of control, instrumental control incentives). Why would policies remain fixed under such changes?
For example, if an adversary knows that I have access to more information, they might behave more cautiously. Indeed, more information can sometimes [decrease expected utility](https://www.cmu.edu/dietrich/philosophy/docs/seidenfeld/On%20the%20value%20of%20information%20in%20games.pdf) in multi-agent situations. Multi-agent dynamics can also make agents behave as if they have an instrumental control incentive on a variable, even though they don’t satisfy the single-agent criterion. For example, the actor in an actor-critic architecture behaves (chooses actions) as if it tries to control the state and get more reward, even though it doesn’t satisfy the definition of a single-decision, single-agent instrumental control incentive:
*Actor chooses action (A) the critic a score for each action (Q). The action influences the state (S) and the reward (R). The actor wants good critique (Q(A)), and the critique wants to predict actual reward (=).*For these reasons, we’ve been working to extend the incentive analysis to multiple decisions. We’ve established a [complete graphical criterion](https://arxiv.org/abs/2202.11629) for the value of information of chance nodes in single-agent, multi-decision influence diagrams with [sufficient recall](https://dspace.mit.edu/bitstream/handle/1721.1/41530/MIT-CSAIL-TR-2008-029.pdf?sequence=1), and a [way to model](https://cgi.cse.unsw.edu.au/~eptcs/paper.cgi?TARK2023.17.pdf) forgetfulness and absent-mindedness. Further work may push further in these directions.
In the [discovering agents paper](https://arxiv.org/abs/2208.08345), we also suggest a condition for when the single-decision criterion can be used: it’s when no other mechanisms adapt to the relevant intervention.
Conclusions
-----------
In this post, we have shown how causal models can both make precise various types of incentives, and how incentives can be inferred from a causal graph, and argued that it is impossible to infer most types of incentives without a causal model of the world. Natural directions for further research include:
1. Extend the result by [Miller et al](https://arxiv.org/abs/1910.10362) to other types of incentives, establishing for which incentives a causal model is strictly necessary.
2. When a system is incentivized to use an observation as a [proxy](https://dl.acm.org/doi/abs/10.1145/3531146.3533242) for another variable? Value of information and response incentives give clues, but [further](https://arxiv.org/abs/2202.10816) [work](https://arxiv.org/abs/2306.00636) is needed to fully understand the conditions.
3. Develop causal [mechanism design](https://en.wikipedia.org/wiki/Mechanism_design) for understanding how to incentivise agents to respond in **appropriate** ways, and understand their **degree** of [influence](https://www.alignmentforum.org/posts/Qi77Tu3ehdacAbBBe/agency-from-a-causal-perspective).
4. Continue [the extensions](https://arxiv.org/abs/2202.11629) to multi-decision and multi-agent extension of incentive analysis, with generalised definitions and graphical criteria that work in graphs with multiple decisions and agents.
In the next post, we’ll apply the incentive analysis to various misspecification problems and solutions, such as manipulation, recursion, interpretability, impact measures, and path-specific objectives.
1. **[^](#fnrefvt2gxcuvqa)**Some others being computational constraints, choices of learning algorithm, and environment interface.
|
38f1d7a7-5b63-4bd9-8179-7d9ad0f1c3c6
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
David Wolpert on Knowledge
*Financial status: This is independent research, now supported by a grant. I welcome* [*financial support*](https://www.alexflint.io/donate.html)*.*
*Epistemic status: I’m 85% sure that this post faithfully relates the content of the paper that it reviews.*
---
David Wolpert has written a paper entitled [constraints on physical reality arising from a formalization of knowledge](https://arxiv.org/pdf/1711.03499.pdf). The paper is interesting to me because it provides a formalization of knowledge that presupposes no agent boundaries, and makes almost no assumptions about the representation of knowledge or the structure of the entities using it, yet is strong enough to derive some quite interesting impossibility results. I think his approach is worth understanding because it is, to my eyes, both novel and helpful, although I don’t expect the specific definitions he gives to provide any "off the shelf" resolution to questions in AI alignment.
Wolpert begins with the notion of a universe, and then supposes that if there is an entity anywhere within the universe that can infer facts about that universe, then there ought to be some universes where that entity entertains questions about some property of the universe in which it is embedded, and in those universes, such an entity ought to later be configured in a way that corresponds to the truth about the question that was entertained.
It is not required that we can look at a particular universe and identify which specific question is being entertained at which time by which entity, nor that we can identify boundaries between "entities" at all. All that is required is that if something resembling inference is happening somewhere within the universe, then there exists a function from universes to questions, a function from universes to answers, and a function from universes to the actual value of some property such that a particular relationship holds between these three functions.
Wolpert’s contention is that his formalization captures what it means for inference to be happening. He does not make this contention explicit, though the contention is certainly required to make his framework interesting, since the conclusions he wishes to draw are things concerning what can or cannot be known by intelligent entities within a universe. If the underlying contention is false, and his mathematical structure does not capture what it means for inference to be happening, then all his theorems still go through, but the conclusions are just statements of mathematics with nothing really to *say* about the nature of inference, knowledge, or intelligence.
I will now attempt to explain Wolpert’s framework through an example. I will begin with a specific instantiation of his framework, and then describe the general case.
Example: robot vacuum
---------------------
Let us consider universes in which there is a single room of some particular shape and size, and a robot vacuum that starts in some particular position and orientation. Here are three possible floorplans and robot poses:

Let us assume that the robot vacuum is programmed with some deterministic movement algorithm designed to clean the floor, and that in each universe the robot moves about for ten minutes and then that is the end of that universe. When I refer to "a universe" I am referring not to a single time-slice, but to an entire ten minute universe history, and that includes the evolving internal state of the robot’s computer. For the remainder of this section, the set U.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
will refer to all possible universes with this particular structure (floorplan plus robot vacuum evolving for exactly ten minutes).
Here are three trajectories:

Now the robot vacuum is equipped with a laser emitter and receiver that it uses to measure the distance to the nearest obstacle in front of it, and its movement algorithm uses these measurements to decide when to make turns. Intuitively, we would say that the robot is able to infer the distance to the nearest obstacle, but making such a notion precise has [proven](https://www.lesswrong.com/s/H6kiZXJwYgxZubtmD/p/YLoXcquNkNdsteZYd) [surprisingly](https://www.lesswrong.com/s/H6kiZXJwYgxZubtmD/p/QLosiQsPJepZWtXG4) [difficult](https://www.lesswrong.com/s/H6kiZXJwYgxZubtmD/p/fcnFddKjKZdDXt5cp) in past attempts. We are going to set things up according to Wolpert’s formalization of physical inference and compare the results to common sense.
Let’s suppose that within the robot vacuum’s code, there is a function that is used to query whether the distance to the nearest obstacle is within four possible ranges, say 0-1 inches, 1-3 inches, 3-10 inches, or more than 10 inches. Let’s suppose that this function takes one input indicating which of the four distance ranges is to be queried, and produces the output YES if the robot’s sensors indicate that the distance to the nearest obstacle is within that range, or NO otherwise. Let’s suppose that the function works by switching on the laser emitter, then reading off a measurement from the laser receiver, then doing some calculations to determine whether that measurement is consistent with the queried distance range. I will call this sequence of events a "distance query". Let’s suppose that such distance queries are performed reasonably frequently as part of the robot vacuum’s ordinary movement algorithm. This is all happening inside the ordinary evolution of a universe.
Now we are going to step outside of the universe and construct some mathematical objects with which to analyze such universes according to Wolpert’s definition of inference.
First, define a question function X over the set U of universes as follows. Given a universe u, find the last time before the end of the universe that a distance was queried in that universe, and let the output of the function be the query that was run. If no distance was ever queried in that universe then let the output be a special null value. So this is a function that maps universes to one of five possible values corresponding to the four possible distance queries plus a special value if no distance was ever queried.
Next, define an answer function Y over the set U of universes as follows. Given a universe u, find the last time before the end of the universe that a distance was queried in that universe, and let the output be 1 if the result of that query was YES, or -1 otherwise. If no distance was ever queried then let the output be -1. This leaves an ambiguity between an answer of NO and the case where no distance was ever queried, but this is unimportant.
Finally, define a reference function Γ over the set U of universes as follows. Given a universe u, find the last time before the end of the universe that a distance was queried in that universe, and let the output be the 1 if the distance between the robot and the nearest obstacle at that time was between 0 and 1 inches, 2 if the distance between the robot and the nearest obstacle at that time was between 1 and 3 inches, 3 if the distance between the robot and the nearest obstacle at that time was 3 and 10 inches, and 4 otherwise.
So in this example X and Y are functions over universes that examine the robot's internal state, and Γ is a function over universes that examines the actual distance between the robot and the nearest obstacle. In general X, Y, and Γ can be any function whatsoever over universes, although Y must have image {1,−1}. There is no need for there to be an obvious agent like there is in this example. The important part is what it means for an inference device to infer a function.
An inference device is a pair of functions (X,Y) over universes, and Wolpert says that such a device weakly infers a particular function Γ over universes, if for each possible value that function can take on (there are four in our example), there is a particular question that can be asked such that for all universes where that question is the one that is asked, the answer given is 1 if the function under consideration has the particular value under consider in the universe under consideration, or -1 otherwise. More formally:
>
> An **inference device** over a set U is a pair of functions (X,Y), both with domain U, and with Y surjective onto {−1,1}.
>
>
>
>
> Let Γ be a function over U. An inference device (X,Y) **weakly infers** Γ iff for all γ in the image of Γ, there is some x in the image of X such that for all universes u in U that have X(u)=x, Y(u) is 1 if Γ(u)=γ, or -1 otherwise.
>
>
>
There is no structure at all to the universes in this definition, not even a configuration that evolves over time. The only structure is given by the functions X, Y, and Γ, which partition the universes according to which question was asked, which answer was given, and the true value of a certain property. The definition of weak inference is a relationship among these partitions.
Note that it is okay for there to be universes in which no question at all is asked.
Example: human inferring weather
--------------------------------
The motivating example that Wolpert gives in the paper consists of a human who is inferring the weather at a particular place and time. At a certain time t1, Jim is thinking about the question "will there be a cloud in the sky over London tomorrow at noon?", then at some later time before noon t2, Jim is thinking either "yes" or "no", then at time t3=noon there is either a cloud in the sky over London or not. We set up the functions X, Y, and Γ to pick out the relevant quantities at the relevant times, and then we apply the definition of weak inference to understand what it means for Jim to infer the weather over London at noon.
This example is a particular form of inference that we might call prediction, because t1<t2<t3. The definition of weak inference doesn’t require a universe to have any time-based structure at all, much less for questions, answers, and reference measurements to be given in a certain time order, so we could also consider examples in which t1<t3<t2, which we might call observation because the question is posed first, then the event transpires, then an answer is given. We could also consider t2<t1<t3, which we might call remembering because the event transpires first, then the question is posed, then the answer is given. Wolpert’s framework gives a single definition of what it means to predict correctly, observe correctly, and remember correctly.
Of course there might be many universes in which there is no "Jim", or where Jim is not entertaining any such question at t1, or is not thinking "yes" or "no" at time t2. This is fine. All that weak inference requires is that for each possible value of the reference function there is a question such that for all universes where that question *is* the one picked out by X, the answer picked out by Y is "yes" if the reference function does in fact evaluate to the value under consideration in universe under consideration, or "no" otherwise.
There might be universes in which a question is entertained at t1 but no answer is entertained at t2. In this case we can simply define X to only pick out a question in universes in which an answer is also given by Y, and similarly for Y to only pick out an answer in universes where a question is given by X. We might ask whether this flexibility in choice of X and Y might allow us to "cheat" and say that inference is happening when it is not. I will discuss this below.
Example: language models
------------------------
We might consider a language model such as GPT-3 as an inference device. The question space would be the set of all possible prompts that can be given to GPT-3, and we might take the answer to be "yes" if the first word in the response is "yes", and "no" otherwise. We could then ask which properties of the world GPT-3 is capable of inferring. This wouldn’t require that GPT-3 answer all possible questions correctly, only that there is *some* question that it always answers correctly with respect to each possible value of a reference function.
In order to apply Wolpert’s notion of weak inference in this way, we need to imagine some range of possible universes, each of which has a separate GPT-3 trained within it. Since the universes differ in certain ways, the training data supplied to GPT-3 in each universe will be different, so the model may give different answers to the same question in different universes. We cannot apply Wolpert’s notion of weak inference within a single universe.
This makes sense because the whole *reason* that we are impressed by language models such as GPT-3 is that it *seems* that the training method is somewhat universe-agnostic. When given a prompt such as "flying time from London to Paris is", GPT-3 gives an answer that (we assume) would have been different if the flying time from London to Paris was in fact different to what it is in our universe.
Impossibility results
---------------------
We have so far considered a single inference device (X,Y) and a single reference function Γ. It is also possible that a single inference device might weakly infer multiple reference functions. This would mean that the question space is sufficiently broad that questions can be entertained about each possible value of many reference functions. For example, we might consider reference functions corresponding to various possible temperatures in London at noon, and various possible wind speeds in London at noon. In any one universe, only one question will ever be picked out by X, and that question might concern either the temperature or the wind speed. The answer picked out by Y must be either yes or no in order for (X,Y) to satisfy the definition of an inference device, so only one answer is ever given by a particular inference device, which means that questions cannot ask for answers to multiple sub-questions.
We might ask whether it is possible to have an omniscient inference device to which questions about any reference function whatsoever can be posed, and which will always give the correct answer. Wolpert gives a simple proof that no such inference device can exist. The proof simply notes that you could take the inference device’s own answer function, which can take on two values ("yes" or "no"), and ask whether the output of that function is "no". In this case we are asking a question like "will you answer no to this question?" A correct answer cannot be given.
Wolpert gives a second impossibility result concerning two inference devices. He shows that it is not possible for there to be two inference devices that mutually infer each others’ output functions. The formal statement of this requires a notion of distinguishability between inference devices:
>
> Two devices (X1,Y1) and (X2,Y2) are **distinguishable** if for each pair of questions x1 in the image of X1 and x2 in the image of X2, there is a universe u in which X1(u)=x1 and X2(u)=x2.
>
>
>
Given this definition, Wolpert proves that no two distinguishable inference devices can mutually infer each others’ answer function. The proof goes as follows. Suppose we have two inference devices named "Alice" and "Bob". Let us take their answer functions and divide all possible universes into four groups according to the answers Alice and Bob give:

Now we wish to know whether it is possible for any such pair of inference devices to weakly infer each other’s answer functions, so we are going to ask questions to Bob and Alice concerning the answer given by their partner, and we are allowed to ask about either of the two possible answers ("yes" or "no"). Suppose that we ask Alice the question "is Bob’s answer ‘no’?" and we ask Bob the question "is Alice’s answer ‘yes’?’ Now there are only four pairs of answers that any pair of inference devices can give, and these correspond to the regions labelled 1 through 4 in the figure above:
* In region 1, Alice’s answer is "yes" and Bob’s answer is "no", in which case Bob is incorrect.
* In region 2, Alice’s answer is "yes" and Bob’s answer is "yes", in which case Alice is incorrect.
* In region 3, Alice’s answer is "no" and Bob’s answer is "yes", in which case both are incorrect.
* In region 4, Alice’s answer is "no" and Bob’s answer is "no", in which case both are incorrect.
Hence no two inference devices can mutually infer one another’s answer function.
Strong inference, stochastic inference, and physical knowledge
--------------------------------------------------------------
The ideas that I have discussed so far occupy just the first quarter of the paper. Wolpert goes on to define strong inference, stochastic inference, and something that he calls physical knowledge. Strong inference concerns inference of input/output maps, which differs from the way that weak inference is about inferring one value at a time. A device that strongly infers another device can predict the output of the other device for any input. Stochastic inference introduces a way to talk about the accuracy with which a device infers a reference function, which relies on a probability distribution over universes. Physical knowledge is weak inference with the added constraint that the device must answer "yes" in at least one universe and "no" in at least one universe for each possible question.
Overall I think that there is a central idea about how to define inference in the paper, and I think that the definition of weak inference contains that idea, so I am going to examine that concept alone. I will give an example below in which weak inference fails to capture common sense notions of inference. I do not think any of the extensions Wolpert proposes meaningfully mitigate these concerns.
Discussion
----------
I would like to know what some policy produced by machine learning does or does not know about. If my robot vacuum is unexpectedly tracking, for example, whether I am sleepy or alert, then I’d like to switch it off and work out what is going on. My robot vacuum should be concerned with floorplans and carpet cleanliness alone.
But merely having *information* about facets of the world beyond floorplans and carpet cleanliness is not yet a reason for concern. If my robot vacuum has a camera, then the images it records will almost certainly have mutual information with my level of sleepiness or alertness whenever I am within view, but this does not concern me unless my robot vacuum is explicitly modelling my state of mind. What does it mean for one thing to explicitly model another thing? That is the basic question of this whole investigation into what I am calling "knowledge". What would it look like if we took Wolpert’s notion of weak inference as the condition under which one thing would be said to be modelling another thing?
I think weak inference is probably a necessary but not sufficient condition for knowledge. The reason is that the functions X and Y can contain a lot of intelligence and can therefore be set up to make a system that is really just recording raw data appear as if it is inferring some high-level property of the world. Here is an example:
Suppose I have installed a video camera in my house that records 10 seconds of video footage in a rotating buffer. I will now show that this video camera weakly infers my level of sleepiness or alertness. Take the reference function Γ to be my true level of sleepiness expressed as a number between 0 and 255. Take the question function X to be the top-left pixel in the image at noon on some particular day, which we will interpret as asking "is my sleepiness level exactly x" where x is the intensity of that pixel. Now suppose that we have, outside of the universe, constructed a model that does in fact estimate my level of sleepiness from an image containing me. Call this external "cheating" function Z and suppose that it works perfectly. Then define Y as follow:
* If the intensity of the top left-pixel in the image at noon equals the level of sleepiness predicted by Z then output 1
* Otherwise, output -1
The pair (X,Y) now weakly infers Γ because for each sleepiness level between 0 and 255, there is a "question that can be asked" such that for all universes where that is the "question that has been asked", the "answer given" is 1 if Γ equals the sleepiness level under consideration in the universe under consideration, or -1 otherwise. Two things are going on here:
1. The answer function inspects the same properties of the universe that the question function inspects and in this way can produce answers that depend on the question, even though we are using a single answer function for all 256 possible questions.
2. The answer function contains a model of the relationship between images and sleepiness, but it is exactly this kind of model that we are trying to detect *inside* the universe. This gives us false positives.
The main results that Wolpert derives from his framework are impossibility results concerning the limits of inference within physical systems. That his definitions provide necessary but, it seems to me, not sufficient conditions for what common sense would describe as "inference" is therefore not much of a criticism of the paper, since the impossibility of a necessary condition for a thing implies the impossibility of the thing itself, and in general obtaining a result in mathematics using weaker starting assumptions is of value since the result can be applied to a greater variety of mathematical structures.
Our goal, on the other hand, is to positively identify deception in intelligent systems, and our main impediment to a useful definition is not false negatives but false positives, and so one more necessary condition for knowledge is not of such great value.
### A knowledge computation barrier?
Many proposed definitions of knowledge fail to differentiate knowledge from information. Knowledge can be created from information by a purely computational process, so any definition of knowledge that is not sensitive to computation cannot differentiate information from knowledge. Since this has been the downfall of several definitions of knowledge, I will give it the following name:
>
> (**Knowledge computation barrier**) Suppose that a definition of knowledge claims that knowledge is present within some system. Consider now the ways that this knowledge, wherever it is claimed to exist, might be transformed into different representations via computation. If the definition claims that knowledge is still present under all such transformations, then the definition fails to capture what knowledge is.
>
>
>
To see why this is true, consider a system in which data is being recorded somewhere. We would not, on its own, call this knowledge because the mere recording of data gives us no reason to expect the subsystem doing the recording to have outsized influence over the system as a whole. But now consider the same system modified so that a model is built from the data that is being recorded. This system differs only in that a certain lossy transformation of the recorded data is being applied, yet in many cases we *would* now identify knowledge as present within the subsystem. Model-building is a purely computational process, and it almost always *reduces* information content compared to raw data. Any definition of knowledge that is not sensitive to computational transformations, therefore, will either claim that knowledge exists in both the data-recording and model-building subsystems, or that knowledge exists in neither the data-recording nor model-building subsystems. Both are incorrect; therefore, an accurate definition of knowledge must be sensitive to computation.
Conclusion
----------
I think Wolpert’s framework is worth examining closely. There is an elegance to his framework that seems helpful. It assumes no Cartesian boundaries between agents, and grounds everything into a clear notion of a shared objective reality. In this way it is highly aligned with the [embedded agency ethos](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh).
It may be worth quantifying inference in terms of the circuit complexity of the function Y required to recognize what answer is given in what universe. Perhaps we could describe the accumulation of knowledge as a reduction over time of the circuit complexity of the simplest Y capable of discerning answers from the physical configuration of a system.
|
c5efc972-7a4d-49cd-af10-16395e0102d0
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Donation recommendations for xrisk + ai safety
Would be curious which top recommendations people have in the areas of xrisk and ai safety? Have donated to and considered:
* <https://funds.effectivealtruism.org/funds/far-future>
* <https://existence.org/>
* <https://intelligence.org/>
* <https://futureoflife.org/>
* <https://www.nti.org/>
* <https://www.cser.ac.uk/>
Any other top choices that seem potentially more underfunded and impactful? Feel free to share your own effort but state it as such, or other conflicts of interest
|
c7031942-e565-497e-b882-ee1d657d7a96
|
StampyAI/alignment-research-dataset/youtube
|
Youtube Transcripts
|
Logical Induction: Progress in AI Alignment | Andrew Critch | EA Global 2016
so this is joint work with primarily
Scott garabrant who is here Scott can
you indicate all right so this man over
here is like the primary source of the
ideas in this paper and then the four of
us have sort of like crowded around
Scott as a sort of carbon rod
arrangement to like absorb the neutrons
of like logical induction that have been
coming out of him into a controlled
reaction that like people can eventually
understand so you can I guess I'll
eventually have these slides or some
slides very similar to them probably on
my website so if you want to look at
them you can google Critch they might I
don't know if they might end up on
intelligence that order not but that's
that I'll just wait uggs wait a little
bit looks like there's like oh yeah fire
fire hazard we don't want to kill all
the people who understand logical
induction huh yeah let's do lights yeah
blue all right so rough plan for how the
stock is going to go is first i'm going
to say what logical deduction is seems
important then i'm going to say a little
bit about why we care about it because
it's intrinsically super cool like you
don't you almost need to care about
anything to care about logical deduction
but it also we came to it from from
caring about long-term concerns about
the control of highly advanced machine
learning systems or artificial
intelligence and so we'll talk about
that then I will I guess probably I
should be over here in order to be able
to point with my left hand since this
thing does it work so then I'll then
we'll get into like the technical stuff
and then I'll like try to save a little
bit of time at the end for
interpretation like what does this all
mean who are we in the universe okay so
here is a little table that I'd like to
use to illustrate what we mean by
logical deduction so suppose
I asked I I've got a ten sided die like
this and I roll it and I ask you what he
what's the problem I role that I cover
it up and I ask you what is the
probability that that died landed seven
so that's probably détente equal 7 and
if you think about it for a minute
you'll say ten percent right oh yeah can
you guys can you guys nod if things I
say makes sense and like make weird
squinty faces yeah okay cool make weird
squidgy face as if it doesn't make sense
alright so if you think about for a
minute you'll say ten percent and if you
think about it for a day you'll still
say ten percent if I've still covered as
a dye is still covered you will still
think ten percent and no matter how long
you think about it you'll think ten
percent at the opposite end of this
spectrum you can have a question like
hey what do you think is the chances
that the tenth digit of Route 10 is
seven so if you think about it for a
minute you'll probably say ten percent
because you don't know right but if I
let you think long enough and maybe if
you've got some paper you might be able
to figure out that it's probably not
seven like you lose some calculations
and you'll be like I don't think it's
seven but like maybe I screwed up but I
really don't think it's seven and then
if you think for infinity which I
apologize sorry for leaving you in the
room with the paper but you will
eventually converge on an entropy zero
because it is not seven and then there's
something in between which is what we
see going on in machine learning all the
time which is where you've got some
observation like maybe right before the
die was covered by the cup you got to
see like you know you got to see it for
a brief moment while it was like
tumbling does this make sense for this
picture I don't know if you can see a
little bit of blur there right so you
see it and at first you don't have time
to think about it you're like yeah ten
percent right but if you think about it
longer you can like run some simulations
if you're a human or if you're in a I
you know you might run a bunch of
simulations and realize actually based
on what I saw going in it looks like a
little more likely to be seven because
seven was like looked like it might come
up does that make sense so you have to
think about it to change from this ten
percent to this fifteen percent and if
you thought way WAY longer maybe you
could get
even more precise but eventually would
like top out like you maybe still don't
have enough information about like the
shape at the table under the cup or
something so you still can never be
quite sure what how it ended up so this
is a this is like a mixture this is one
kind of uncertainty you could call it
empirical uncertainty and it refers to
uncertain do you have because someone
didn't tell you some information or you
didn't observe something in the world
this is a different kind of uncertainty
its uncertainty not because someone
didn't show you something but because
you haven't had time to think it you're
not waiting to do an experiment or see
anything in the world to know the digits
of route 10 you just need to think about
it and then this is something in between
you see something and you have to think
about what you saw okay so probability
theory if you go on Wikipedia and look
up probability theory you'll have see
some axioms that tell you how these
probabilities right here should relate
once you've fully integrated all the
information that is presented by your
observations there are certain coherence
laws that probabilities have satisfied
like probability of a and B should be
less than probability of a right things
like that are probably of a and B plus
probably a or b has equal probability of
a plus probably be you guys seen stuff
like that right so there's some rules
the probabilities have to satisfy but it
takes a little time to satisfy those
rules you have to think you have to
think about stuff to like figure out
what your probabilities are going to be
and while you're thinking you know you
might not satisfy all the rules you need
like some time to like get your
together and have coherent thoughts so
logical uncertainty and logical
induction is is more about the
horizontal movements in this table so
I'm going to refer to so there's two
there's two interesting questions you
could ask that are not answered by
probability theory one is that like what
are what are like good ways for
probabilities to change over time like
what's the definition we probably theory
sort of tells you like the definition of
like good coherent beliefs once you've
thought about stuff but what's the
definition of like a good way to change
your beliefs over time as you think
simulation predictable so the
interesting thing here is you're not
accumulating any information you've
already accumulated the information
right here when you've seen the snapshot
and now you're just thinking so that's
really important distinction the like
the step from here to here would be
accumulating information or rather from
here to here right whereas in the
information theoretic sense information
refers to you not observed random
variable and here you've already
observed as much of the thing is you're
going to observe and there's this other
thing you have to do like so you see the
snapshot and then you have to think to
like process the information yeah then
did you speak when you said for snaps or
two maybe you ran some situations well
depends on how you think right so maybe
you think in simulations maybe you think
with some other heuristic maybe you just
write down some differential equation
and just say I don't know how you do it
maybe you just feel when you humans like
just kind of feeling something out-like
how does it look like it was going to
fall it's kind of like a simulation your
head yep yeah wave throwing that
dividing line between say computing that
gets you to have you seen the blurry
seven different we know the photons to
your menu representation versus the line
of now I have the information yeah you
can you can to draw the line and but the
interesting thing is in our theory that
you will get into later you'll see that
you don't have to draw the line that's
fair to say you can mix and cook when we
start mixing and axioms with
observations or something we're not
going to really talk much about it but I
think the answer is that you don't have
to build the line so there's one
question is like what are good ways of
probabilities changing over time when
you're not making the observations
you're just thinking about stuff you've
already seen ok so what rules should
these things satisfy then there are any
really there are many rules in the
literature or papers about like what's a
good way of doing that and another
question is obviously there's something
intuitive about fifty percent being like
a bad answer here right if I'm asking
you to guess probability of a digit
you're going to probably use ten percent
until you've had some time to think
about right so is there some theoretical
framework that we can we can define that
would that would imply this intuitive
judgment of hours that
you know fifty percent would be like a
dumb answer to start out with here and
like ten percent would be a good answer
to start with right and then you you're
still going to have to think to get 20
or 100 eventually because this is
completely determined by your
observations the answer is completely
determined by things yard see okay so
does that make sense logical this like
logical induction stuff is about like
hey what's a good place to start and
like what's a good way to evolve your
beliefs over time when you're not
getting the information yours processing
what you've already seen cool yeah for
the fifty percent thing isn't that just
like equal outcomes number of outcomes
yeah so you could say you know if
there's an equal number of outcomes
maybe they should all have equal
likelihood but should they I mean why
not just have seven as fifty percent
likely to be true and the other nine are
each you know 50 over nine percent
likely be true like you'd need to make
some argument and you'd like to
formalize that argument we like that
argument to be like written down in math
so we can do proofs about it cool so
that's what we're gonna be talking about
and I like to say that well we'll get to
that so the question is now why do we
care about this thing that's the quote
the question is how do we do the purple
stuff and why do we care well the reason
I care and I would you know different
people at Mary might have different
nuanced descriptions of like why they
care but we want to reason about highly
capable AI systems before they exist we
want to develop theories of safety and
control and alignment for very very
powerful thinking machines that could do
a lot of good for us if we like a name
to move the right direction and it'd be
great to have those like aiming theories
before the super powerful thinking
machines are like a round so we'd like
to have theories of AI very very strong
AI systems before they exist or rather
like how to use them and how to direct
them so so we want to how can we it's
this hard problem right you often hear
people say like we can't really work on
a I love it because we don't have AI yet
we don't have highly advanced machine
learning systems we don't have and if
you're in jessica's talk I'm sure she
may be convinced to you that there are
things we can talk about now but if
you weren't I'll convince you that
there's even other things that you can
talk about now because we can model a I
whatever it is it's going to be good at
stuff we could just develop a theory of
things that are good at stuff okay and
then try to like decide how we're going
to aim the thing that's good at stuff
for various values of stuff so what do
we think what do we already have
theoretical models of good at stuff for
well choosing actions there's this
rational choice theory I don't know if
you've heard of the Von I'm raise your
have you heard of the by knowing
Morgenstern theorem awesome right so
we've got this theorem some of the spark
yes spark students know about it so
there's this theorems that tell you you
know if your if your choice is between
Gamble's or your choices between random
outcomes satisfy some very modest
conditions that seem very
unobjectionable then there's a function
whose whose value you're maximizing the
expectation of so there's this like nice
theorem that helps you think about what
is it you know an agent or a system
that's really good at choosing between
actions well if it's really good at
choosing between them in a way that's
like not self contradictory it's going
to follow this vm theorem and you know
modulo some adjustments to the BNF
theorem that might want to make but part
of the vm framework is this belief thing
that the agent has right so we had this
rational choice theory and you've got if
you look at the axons of vm there's
these probabilities that it believes ok
so now we need really to catch this out
we need a theory of like how
probabilities and beliefs might behave
and we have that called probability
theory and there's nice theorems like
bayes theorem to tell you how when you
get new information how your beliefs
should adjust after you think about that
information so this is sort of what
you're thinking of earlier when you're
saying yeah accruing the new stuff but
similar to how rational choice theory
depends on probability theory to
formulate it to be formulated or to be
realistic probability theory depends on
this thinking step this step where you
got the information and now you have to
like figure out what your new
probabilities are and so implicitly in
probability theory there's this like
missing component which is
you know how do you figure out what your
probability howdy what what do you think
before you're done thinking what are
your beliefs before you've finished
figuring out what they are and there
wasn't really a theory for this there's
been like various fragmented attempts at
like making such a theory but we'd like
to say an AI is going to be good at
rational choices it's going to be good
at probability theory and it's also
going to be good at whatever the heck
this purple stuff is so we'd like to say
good we like to define good at purple
stuff so that we have a model of a I
before it exists and there's all kinds
of other features of AI that we'd like
to have models for and this is just one
such feature good at purple stuff good
at logical induction and so the way that
we're going to use these words logical
uncertainty is like the state that
you're in before you finish figuring out
the logical consequences of your belief
of your observations so you are
logically uncertain about the digits of
square roots Hannah you know the
definition of square root 10 so you know
everything you need to know in order to
logically infer the ditches over 10 but
you're logically uncertain because
you're you haven't finished doing the
logic that you need to do to figure it
out does that make sense so largely a
logical uncertainty is the state that
you're in and then logical induction is
how you deal with that state it's how
you refine your relief over time cool so
any questions about like what this means
and like why we care about it before I
go on to stuff yep you mean like a few
like a future AI system that's like not
that good at logical induction yeah so
there's different ways it depends on
like the nature of the not good at it so
if you can say a particular failure like
a particular bias that will have then
you can say okay it's going to make
these types of errors but even to define
a bias you need to say what good
judgment is and then a bias by
definition is like deviation from good
judgment so we can't even like before we
define good judgment we can't even begin
to start to talk about your question yep
Scott Garrett Brandt was too young I
don't know
that's my answer uh uh yeah I mean I to
be honest I didn't I had I I thought
about this a lot in high school I had
lots of math edition friends it makes it
like when you're when you're you're
seeing some talk about the Goldbach
conjecture you know when someone says
like oh we discovered like this new
class of numbers that are all
expressible as sums of primes right your
belief in Goldbach conjecture goes up
right but like how much should it go up
right you've observed no new empirical
information you knew what primes were
you knew the definition of addition so
logically go back unless its independent
infer you know it is just it's just a
logical consequence of stuff that you
know nonetheless you go around adjusting
your beliefs about it in response to
other other like you can think of the
mathematical community itself as like a
an algorithm that's like proving stuff
and discovering stuff and it's like
updating its like intuitive beliefs
about stuff and math editions a lot of
my friends in math at some point have
this crisis of like what are we doing
like what are the principles we're
following when we have these like fuzzy
beliefs about which theorems are true or
false before we even prove them so then
you go through this crisis of like
trying to define what are the good
principles for like having your beliefs
about conjectures and then you give up
because you're not scared Scott
garabrant so this happens anyway one day
Scott garabrant doesn't give up and we
have logical inductors so so you could
try to define what does it mean to be
good at the purple stuff good at the at
refining your beliefs while you think
about them and a lot of people have
thought about this before there's
various criteria in the literature that
sort of just you're at like well here's
one here's one property that we expect
good logical on certain reasoner's to
have we want them to be computable we
want them to be overt we want them to
eventually satisfy probability laws
after they think for a while so if a
implies B eventually we want the thing
to have probably very less and
probability be so there's all these
other properties I'm not really going to
get into them because we'll see them in
a lot of them in finer detail later in
the talk but
want to sort of point at you know since
really at least at least as far back as
the as the 40s you've had people talking
about criteria for good logically
uncertain reasoning and but there there
are sort of informal kind of like you
saying like oh like you know equal like
bunch of outcomes should all be Khalil I
CLE until you've had reason to think
otherwise but you have to figure out
first of all that the outcomes are
mutually exclusive in order to even
apply that so there's some thinking you
need to do first and so part of part of
the progress that we've made is we've
adopted a formalism where we can state
versions of a lot of these properties
and then check if their possible or not
or we can prove that they are approved
that sometimes they're mutually
incompatible so that's already a step
forward is is that just the framework of
talking about a common language we're
talking about all these sort of like
informal approaches or different formal
approaches to logical uncertainty in the
past and i think there's lots of
interesting questions about applications
like how we're going to use this and I'm
going to defer those until after we've
concretely seen the math because it'll
be much easier to like point at the
object once we've once we've seen some
math so I guess I've already had any
more questions about just like what the
logical uncertainty thing is I think
we've had a bunch already so seems good
okay so I believe now my slideshow is
going to tell me to switch to Beamer
where I can do math well that's the
wrong here we go this is a talk I gave
its park this is the logical induction
one okay so the way the technical part
of the talk is going to go is I'm going
to formally state the definitions of
illogical induction and some desirable
properties of it we're going to talk a
lot about the properties we have we have
an algorithm and a criterion that
implies a bunch of nice properties and
the algorithm satisfies the criterion so
that's nice and I'll talk about those
and then i'll talk about formatting the
which is actually harder than requires
more definitions than talking about it's
a lot of its properties will talk about
the algorithm a little bit maybe time
permitting and then I'll switch back to
PowerPoint and talk about like some
conclusions so let's get to the
formalizing we're going to choose a
language or a formal system of logic
gamma for encoding statements about
variables and computer programs so he's
heard of piano arithmetic in this room
awesome okay so piano so we got those
like that seventy percent so piano music
is a language we're talking about
arithmetic but it turns out that you can
encode lots of statements about computer
programs in piano arithmetic and then
prove stuff about them so it's a very
expressive language we run our language
to be pretty expressive because
obviously we want a eyes to be able to
like think about computers I think or we
want our theory of AI is to include a
eyes that think about computers and then
this capital lambda is going to be
instead of all sentences expressible and
gamma and then a belief state we're
going to define a belief state that's
the state that you're in while you're
still thinking about stuff is a map from
sentences two probabilities that is
constant outside of finite set and I
think we've toyed with different
definitions where it's constant outside
the finite subset or zero outside the
finite subsets or doesn't matter the
point is if you're kind of writing down
your beliefs you're only going to be
able to written out like a finite number
of beliefs at any finite time so think
of your belief state as just like a list
of probabilities that you've managed to
assign to stuff so far and we don't want
to say at this point in talking about
the theory that your belief state is
like whatever you implicitly believe or
you're like revealed beliefs or your
reveal preferences they're like things
that you've managed to explicitly write
down because if you define your beliefs
to be something that's derived from your
preferences or derived from actions or
derived from stuff that's already in
your mind then then you're no longer
separating the belief in the computation
and the whole point of this framework
here is to is to separate the belief and
the figuring out of the belief so your
belief state is just the stuff that
you've written down probabilities for
and a reasoning process is just like a
sequence of belief states so at each
time step you like adjust your
probabilities to all the sentences that
you've seen so far I maybe
you add some more sentences that you
have beliefs about is that make sense
cool encourage about that sweet so now
with this definition we can say what is
a good reasoning process so some pro you
know properties you'd like to have for a
good reasoning process is that it should
be computable at least our model of it
should be computable we would like it to
converge we'd like this thing not to be
like flip flopping around forever
changing its mind about stuff like we'd
like it to be eventually figuring out
what it thinks about a thing so that's
convergence and that that means to say
that you know we've got we've got the
symbol P sub n of Phi so that means like
the the probability that you saw at time
n to sentence five or feet should i say
i'll say fee and we want as n approaches
infinity for that limit to exist and
we're going to denote it by p infinity
so it's what you will eventually
converge to thinking after forever you'd
like the limit to be a probability
distribution so once you're done
computing your belief update at the end
of time you'd like to satisfy the laws
of probability finally so that's these
kinds of rules and there's this other
property that people have talked about
that i think is desirable which is that
if you haven't if you never disprove
something you shouldn't think it's
impossible so if you and if you don't
prove it you shouldn't think it's one
hundred percent so that's this what this
says if gamma does if your logical
system doesn't prove something then
you're not going to be hundred percent
sure but does that make sense so these
are some nice properties and in fact we
knew before garabrant induction that
these properties can be satisfied guy
named abram demsky had an algorithm that
satisfied these and the paper that is
sitting in draft form on that table
right there shows that those properties
are related so there's a there's a
single property that I'm going to call
the garabrant induction criterion which
implies all four of them so that's kind
of nice it means that we it like
intuitively we were just as humans just
kind of generating these desirable
properties right and you might think
that the only thing they have in common
is that humans want algorithms to have
them but they have another thing in
common which is that they follow from
the gear Bretton
action criterion and that sort of
validates us as intuitive generators of
criteria that we managed to generate a
sort of coherent set of criteria to be
satisfied us as in humans like all the
people who've been working on this field
it also shows that the properties are
feasible because we have an algorithm
then i'm going to call logical deduction
logical induction algorithm 2016 or Lea
and I'm putting the 2016 in there
because like you never know people are
going to update these things people
might pick this up after we're done with
it and make it better and I don't want
to steal the name logical induction
algorithm forever someone else might
make it better so so we know that the
those things are feasible and also their
extensible there's a bunch of other
awesome properties that follow from the
gab reduction criterion and are
therefore satisfied by the algorithm
that we have so that is the deal that's
like what's going on here we got
algorithm that does the purple stuff
good okay so here are some properties
that the algorithm satisfies its it's
actually faster to stay the properties
of the algorithm then it itself or even
the the criterion but we'll get to the
criterion I just want it before it's
kind of like before I start giving you
this like big fat definition of an
algorithm or criterion it's nice to like
care right so let me tell you what
properties at have I don't think you
should care what the definition the
algorithm is or the criterion before you
know what properties it satisfies
because otherwise you'd be like whatever
so there's this non dogmatism property
which is that it's like a stronger
version of the dogmatism property from
earlier which is that if you have any
algorithm to output sentences and it
outputs this like infinite list of
sentences that are consistent so in an
au pair of them are contradictory or
contradictory with the theory gamma then
there's actually some constant such that
all the plot all the sentences in that
sequence are above probability epsilon
so it's stronger than the earlier
property and here's another
strengthening of the earlier property
which everyone loves kamagra of
complexity you can actually say for a
given sentence if it's never just proven
then it's probability is going to be at
least some constant times 2 to the minus
its complexity so there's a nice
relationship between how complicated the
thing is and how likely it is
and that's why we called ockham bounce
because alkem is that you know the
Occam's razor you should you know if
something's real complicated and believe
it well that's not quite right because
of a real complicated disjunction you
should totally believe that all right
real real long disjunction is probably
true so you need to amend it a little
bit there's some relationship between
complexity and believability and like
this is probably the good relationship
that you want and so and again these are
all talking about the limit right these
are all talking about the probability
bution that our algorithm will converge
to we're not yet talking about what it
thinks along the way yeah question it is
sharp no let's oh there's a there's a
see here yeah yeah yeah I'm sorry yeah
so I guess you could take this to
premium overall see such as this is true
and then it would be sharp like this d
we can prove it for is probably like not
sure Yeah Yeah right right right yep um
yeah any other questions about these
properties actually cool um what today
I'm gonna I get it I gotta rearrange
something because this phone over here
should totally be in my left pocket
where my left hand goes alright so
there's some nice properties now here's
where it gets exciting because people
have been wanting like math to
understand math for ages like Hilbert
was like hey can we like put math on a
mathematical foundation and girl comes
along and is like no and then put luck
comes along and instead sort of I don't
know if you're familiar so there's some
results that say that logic can sort of
believe that it's like sort of
trustworthy up to a certain bound but I
I think that well yeah these are real
these are real cool so our logical
inductor as will call our gear and dr.
it is going to believe it's going to
come to believe in the consistency of
the logical system that it's using
so if we define con n is there is no
proof of contradiction using n or fewer
symbols in our logical system then
eventually at time step in our gharib
and dr. believe any algorithm satisfying
the gear and adduction criterion is
going to is going to believe in the
consistency of gamma up to that point
and in fact there's a stronger claim
which is I'm going to call belief in
future consistency you can actually
choose any computable function so it
could be n to the end to the end to the
end to the end to the end whatever some
crazy function eventually on time step
in our reasoner is going to believe that
gamma is consistent up two proofs of
length F of n so this is like an
extremely strong belief in consistency
that you can basically pick your
favorite computable function and our
algorithm is going to believe in
consistency up to that function those
very very strong and you can put you
could view this as self reflection or
reflection on the deductive process that
it's using but here's a more explicit
self reflective property this is
basically it knows what it notes so at
time step n event for large n at time
step in a garabrant inductor knows what
its own beliefs are so roughly speaking
if the probability if you have some
sequence of sentences that are I'm going
to call this poly time general in fact
I've got a definitely we're going to
elaborate on this in a later slide but
it's beep it's like a you have an
algorithm that takes it in and spits out
a sentence in time prop all anomia linen
so we've got this poly time general
sequence of sentences and at time step
in our our inductor is going to if its
beliefs are like kind of squarely
between a and B like a and B between a
and B by like this epsilon margin does
that make sense it's like forgivingly
inside to the AV interval then it's
going to know that its beliefs are
actually inside the AV interval with
high very very high probability
it's going to be like almost complete so
it's kind of like saying if it's between
89 and 91 % share of the Riemann
hypothesis then it's going to be
ninety-nine point nine nine percent sure
that it is between 89 and 91 % sure of
the Riemann hypothesis me quite certain
of what it thinks yep ya know it is
written down on Tom sub n it has written
down this is my probability already yeah
yeah yeah so at any at any finite time
step it's only going to have regression
up to a certain turn in yeah but like
yeah a time step large and it's going to
know that it knows it knows it knows
some finite number of notes yet yeah
great question and so now when you get
self-reflection like this you've got
something that believes in its own
consistency you got something that knows
what it knows this is like ripe fertile
soil for like paradoxes right we can so
so we should all be getting nervous but
in fact this thing has like a very
strong systemic immunity to the liars
paradox you can define it you can find a
sentence that basically it's equivalent
to saying our garabrant inductor doesn't
believe me okay you write down a
sentence that says pick your favorite
rational probability P you can write
down a sentence that is equivalent to
saying I am less than P likely to be
true so P could be like one percent I am
less than one percent likely to be true
and gay brand dr. just handles the
situation it's just like no problem I
know what I'm going to do it at times
the probability P in the limit and on
the way to the limit it does a crazy
thing where it like is constantly flip
flopping its belief right around
probability P so that it's less than P P
of the time
yeah so good yeah okay so good all right
i'll just say that again so you make so
you make a statement that says i am less
than P likely to be true i'm going to
say p % because if it's natural language
better okay so i am less than p % likely
to be true and then over time its
beliefs about that statement converged
to probability p but they like awesome
over time they like oscillate just
around p so that they are actually less
than P P presented the time and they're
actually so it works out yeah so you
don't get a paradox and it's awesome I
just cannot yeah in fact people are
already like seeing what happens when
like agents that try to make themselves
unpredictable by like trying to protect
you can make an agent from garabrant
inductor that tries to make itself
unpredictable by first trying to predict
its own actions and then having a plan
that if it can predict its own actions
it will do the opposite and by virtue of
that counterfactual plan it can't be
predictable and then you get random
behavior okay anyway it's real cool so a
large paradox resistance it's real cool
and moreover you can go so far as to say
that this thing kind of trust self so
this is another these are properties
like that mary has been sort of pining
for for like years you know just like
how can we have a model of an agent that
like trust itself trust its future self
understands itself knows what it knows
etc and the Taliban just like does it
all so it trust its future self in the
following sense we have a result that is
roughly interpreta belen quote those are
those are human quotes not girdle quote
these are the girdles and those are the
human code so we have a result that
roughly says that the problem oh that
should be a sub n their apologies but
the probability at time step end that
you assigned to a statement feast of n
so again sorry I'm assuming that we've
got this sequence of poly time general
sentences and I'm going to harp on the
meaning of those
I think coming up soon yeah the next law
is going to like harp on how important
that Polly time general thing is we got
some sequence of questions we're asking
the inductor this polynomial time
generate a ball or general and roughly
speaking it has some property like this
that the probability that assigns to fee
given that its future self so f of n is
some large number given that its future
self that signs probability at least p
is it please p asymptotic lee so this is
like saying if future me think that you
know we're ninety percent likely to live
on Mars then I'm ninety percent likely
to thing we're going to live on Mars cuz
future meets tomorrow he's had more time
to think so conditional on future me
believing act I believe act yep given
that you don't know what future me is
going to believe what explanatory power
why why right so wise so this property
is good because you want this thing to
be able to reason about hypothetical
actions it might take like right like if
someone says Critch don't go to Berkeley
because the cognitive science
departments gonna like suck you in
you'll never do math again I'm like well
if future methinks cognitive science is
a good field of study it's a good field
of study so I'm just gonna go to
Berkeley not worry about it that makes
ten like I have this confident that if
future me thinks it's a good idea don't
worry right so a future me think
something is true yeah probably true so
this is conditioning on what future me
thinks it's different from knowing it's
like the post future me thinks it's at
least probability P then I then it
probably at least it is a conditional
probability getting that like you get
nope um the cool thing is the inductor
is an algorithm step one it's an
algorithm so a few algorithms can be
written down formally mathematically
step 3 this algorithm has beliefs about
every mathematical statement including
statements about itself so you can just
write down this statement that's what
the gunner codes mean it means like
mathematically represent the statement
me assigns at least probability P n and
then asked it conditional on that what
do you think the probability is and it's
like well at least that now this is in
quotes because it's not quite true for
example there are denominators like
things could go to 0 it's a bit tricky
so the actual result looked like this
and there's a bunch of definitions that
I would have to say to make that true
but you can like just stare at it a
little bit get a sense of the shape of
it q you left but I mean this is how I
don't know when I read a paper the first
thing I do is figure out the shape of
you know the results in it so it's
something like there's some indicator
function for whether the thing is true
and you multiply the binary variable
that indicates whether this thing is
true by that thing and then you take an
expected value which I haven't defined
at all but there's expected values of
logically on certain variables which is
cool so anyway that's there's some kind
of future trust thing going on here
which is really great because we really
wanted agents that can like think about
reason about their future and like these
hypotheticals where it thinks its
features like it's not going to it's not
going to try not to go to Berkeley
because it's afraid of getting sucked
into cago design you know it's gonna be
like well future me thinks that's a good
idea it it questions about that actually
this is like maybe kind of deep maybe we
should have have let's see lots of like
Steve lots of like faces doing that so
maybe I should just let you guys do some
logical induction right now before I
believe is that you more whether you're
actually endorsing this as a coding for
human oh no I'm right I see ya so I'm
not telling you to like buy lots of
addictive substances and put them in
your house and trusted feature you will
make good judgments about them thank you
yeah I yes like super cruise the pattern
notching that looks sort of like koshi
short put the other way around there is
a yeah there are like oh yeah there are
these like sequences of like bounds on
stuff that are really what's going on
yeah for some valleys of what you meant
you could be right yeah you can think of
that it's like based here
the denominator is multiplied the
state's here but like the definition of
conditional probability yeah you like
multiply this thing all right yeah but
but again I don't I just want to play
Sarah if you want to know more you have
your other paper it's right yeah okay so
that one probably I don't know I don't
know quite yet I was at a math camp with
these guys so I don't really know what's
in the abridged version yet because got
cut down while I was like away so uh but
you notice I keep bringing up these like
polynomial time general questions right
so hopefully you care now when I give
you this definition because you've been
like irritated with it for a while
already so the interesting thing about
sentences that are general in polynomial
time is that they're easy to generate
relative to like other more complicated
complexity classes but they can be
arbitrarily hard to answer so there are
polynomial time algorithms that can
write down questions or statement of
uncertain treat value that are just like
I want to say uncomplete ibly hard to
verify but what I really mean is hard to
verify than any computable function of
the thin number ah lek date taken I
don't know if humans were generated in
polynomial time actually well I don't
know if you use polynomial x agenda
generator um uh yeah I don't well here's
one here's an example in the next slide
so if you let g of n be the statement
there is no proof of g of n in fewer
than f of n characters pick your
favorite giant function f ok and then
make a sentence that says this sentence
cannot be proved in under F of n
characters for each n now you might
wonder how can you make a sentence that
refers to itself that's a trick called
cantor's diagonal emma and actually
there's a version of it that lets you
have this parameter and in it so just
trust me you can look up and boulos
there's ways of like actually writing
down sentences that are provably
equivalent to the self-referential
predicate
and you can see if gamma is consistent
it's not going to be able to prove this
in fewer than f of n characters all
right suppose that I've got a sentence
it says I'm not provable in less than
1,000 characters ok well just as you
can't prove me in under a thousand
characters you can prove in a thousand
one characters all day if you want yeah
after a thousand characters you've
proven that thing is true this thing is
true there's a proof of it but it needs
to be very long think about do logical
blush all right so so this statement is
true right if gamma is consistent this
statement can't have a proof in a short
proof and that's what it says it says I
can't have a short proof so that makes
it true so in fact because of proof you
can actually write down algorithm built
search for a proof of this that
algorithm will provably terminate so
there is a proof of this thing and it at
the very least at the very most as long
as like an exhaustive search of all
strings up to whatever length you need
to prove it so which is not F of n could
be much could be much bigger than F
event but this thing has a proof it's
true but it's proof is crazily badly law
does that make sense but it's very easy
to write down you write this down in log
n time you can actually write down a
template for the sentence and just like
keep stuff stitute in longer binary
strings for in right and then this thing
just gets crazily hard to verify and you
can roughly see that so nonetheless we
have this amazing property called prove
ability induction and permeability
induction says that if you can generate
questions if you can generate statements
in polynomial times that are approvable
it doesn't matter how awfully long their
proof get a garabrant inductor is going
to catch up to your Denton's generator
and start believing the sentences as
soon as you generate them
this warrant pause okay I'm going to
like draw a like kind of cartoon
depiction of this thing because i think
it yeah this is this is a problem this
is a property that we were not looking
for that we just got him got it just
showed up and we were like well really
or at least I was just like what so so
we've got our sentences feast of one
piece of to feast of whatever and
they're all they're all approvable but
would just like crazily batted long
proof so at time step one imagine our
garabrant inductor looks at sentence
number one so imagine the Gare brand
doctors running along over here and like
Ramanujam is like running along over
here generating his super hard to verify
conjectures okay and Ramanujan's fits up
piece of one and gharib and daughters
like man I don't know fifty
percent I don't know if that's true so
then at time step two it still has a
belief about v1 and it's like I I don't
know it's hard I don't get the thing and
then it also has been asked another
question which is piece of two and it's
like yeah I don't know that either okay
just let me think please okay so this
ends is like coming on there's this like
onslaught of conjectures feats of n
right these mathematical conjectures
that have presumably being generated by
some here like intuitive algorithm that
uses heuristic to generate true things
or maybe some like concoction like I
just made that guaranteed to be hard and
eventually one day you know at time step
like you know 10,000 it's like whoa hang
on hang on guys that was true I figured
it out all right so it's like a hundred
percent or like something very close to
a hundred and then and then you're like
all right so it's finally caught on to
the first thing it believes the first
thing and then you're like what about
the second thing and it's like I don't
know that's all I'm thinking about it
okay it's still like roughly fifty
percent and then like much later maybe
like p a million it starts to believe
both the first and second sentences ten
to the six I should have there
does that make sense by time step a
million it is managed to verify like the
first two tenses and it's still super
uncertain about these months so this
look do me right it's like it looked
like the number of sentences has
consider right it's like going to it's
like getting linearly wide but like the
difficulty of verifying them is like
super narrow right so it looks as this
tiny little cone of things going to
believe but no one day this just stopped
and it catches up so that like time
steps duper large in I don't know maybe
it's a google I don't know how big a
number you need it's going to depend on
the particular algorithm but it's
actually going to catch up and just
believe it's going to widen out as it
going to believe even even David number
n as soon as statement number n gets
generated it's a polynomial time in n so
in each time step Enders an algorithm
that spits out or news 10 and runtime is
polynomial similar in some way yeah yeah
they're following some kind of pattern
exactly so it learns that pattern yep
right you've got some algorithm for
spitting out statements that happen to
be provable they're not quite attempt
they don't have to be quite a template
in the in the predicate sense like they
don't have to be a first-order predicate
with is it with an aunt in it and you
keep bugging different ends that the
nature of the state could look very
different but they do have to have
something common which is that there's a
single polynomial time algorithm that's
generating them and the point is that it
catches on to that pattern if there's a
pattern that it can catch on to then it
does yep does it does it automatically
assign the left one 107 like very close
to a hundred percent not actually yeah
for every absent for every epsilon there
exists a large and felt like eventually
over you yeah well you this is the
statement right here right so for every
epsilon there exists a large and such
the non time step and it's on to
probability very near one with it Nathan
f1 yep okay like know that
no it doesn't know that someone's out
there with an algorithm we might yeah it
figures it performs induction yeah yeah
unfortunately mathematical induction was
a term already taken to refer to
deduction so we had to we had to call it
something else yep a little bit later
about how your mission in this system
right here we're examining there's no
information entering it yeah this is all
in between like I've received some data
I've written down some facts about the
data and now I'm like thinking about the
consequences as those fact no new
information yeah that's the whole point
of this framework yep you could you
could make modifications where you like
throw in extra information while
thinking and that'd be like future
papers but like we're well prepared to
like have versions that that that do
that yeah only when they're when they're
all provable now great question so what
happens if like half of them are
provable right yeah so it does a great
job then too so let me get to that so
we're going to have lots of limits so we
made this notation this sequence is like
roughly the same as this sequence means
that their limit difference is 0 this is
a common notation for asymptotic this
one's like roughly bigger than that one
if asymptotically the limit of the other
one minus the first one is our second
one is zero or more so the this
situation where the inductor catches up
to the pattern that's generating the
question we're going to call that
assigning its beliefs in a timely manner
like this is the timely belief you know
at the time that the time that you might
ask get the question the belief it
assigns at that time is a time of timely
belief it's the one that it managed to
generate on time when the question is
made Wednesday who's the next person in
this room by the way
cool so there's this cool brother cool
property which probably this will be the
last cool property to build say in
detail but it says if you have a poly
time general sequence of statement then
if the logical inductor is going to
converge to some pattern of
probabilities that it assigns to those
statements that also poly Tom general
then actually you don't have to wait til
infinity it's going to start assigning
those probabilities in a timely manner
so anything that's any poly time pattern
that it's going to believe at time
infinity it's actually going to believe
along this diagonal here eventually so
it's you can prove lots of cool stuff by
first proving it using tinfinity and
then saying and now there's a large n
where it work so now I have to cool
great so what I'll do is I'll conclude
in such a way that people feel
comfortable leaving the room for a break
and then I can stick around during break
and Nick answer my question so oh yeah
we can't have break in here got got it
makes them makes it I'll just bring my
laptop all right so you know there's
this question of like what happens if
half the sentence is being generated or
true we have theorem today you know that
it at least does well so it might
actually figure out which ones are true
so it doesn't have to say 50 percentage
it says these ones are true and these
ones are fall but if it can't do that
then it's good you percent and we need
to define what it means to can't be able
to do that but if you can't figure out
the pattern and it starts falls back to
using probabilities it learns to be
coherent we had a thermos says that it's
coherent at Tom infinity meaning it
satisfies the laws of probability theory
but actually you can write down what it
means to satisfy a law of probability
theory and show that it approximately
satisfies laws of probability in a
timely manner like on this diagonal
effort for large n and so there's much
nice properties they're going to during
the paper they're going to be more and
more papers
and so the formalization of the
algorithm is basically finance you just
make a stock market of traders that are
betting on the sentences then you
imagine that market and then whatever
the market believes you believe that so
the algorithm is actually very simple
and the criterion is very simple the
state which is part of what makes it so
beautiful it basically the garabrant
induction criterion the single criterion
that implies all those myriad lovely
properties basically there's some
definition of trader which warms my
heart because I used to be a traitor and
it says that you are good at logical
deduction if any trader who's not
willing to possibly worth that risk
losing more than a bounded amount is not
going to be able to make infinite money
from you so if you walk up to a gear
bratton ductor and you promised yourself
you're never going to risk losing more
like going negative a million dollars in
debt if you promise yourself that you're
not going to make a million dollars
betting against it that's just the
definition and from that one definition
you get all those amazing property so
pretty cool I think and we have the
algorithm I think it's a financial
solution to the computer science problem
of meta mathematics and time for meeting
i will use a whiteboard to answer stuff
so probably i'll take the whiteboard out
to whatever it is that i would be able
to field questions and what does all
this mean so fish from femur to
powerpoint so my summary of what i think
just happened at mary is that we had
rational choice theory we had a concept
of an agent in it Calvin or Morgenstern
agent we had some nice axioms that
implied that type of agent we had nice
arguments for why those actions are
important if you don't satisfy them you
lose a bunch of it right and then we
have like various formalized versions of
VNM agents somewhere the utility
functions are reinforcements some aren't
then we have probability theory again we
have a notion of an agent some axioms
that you need to satisfy some like
arguments to say those are good axioms
all right if you don't satisfy let me
lose money
then we have like a formal thing called
salma induction that just does bathing
updating and you know it's possible to
do computed ly because there's a thing
so now we have another row to the table
we've got logical uncertainty theory
which can be used to like implement
probability theory which can be used
implement rational choice theory we've
got an agent concept called garabrant
inductor we've got arguments that show
that they garabrant inductors are like
good reason we know that they're big
they can exist because we had this
algorithm that does it and we don't
actually have this yet we don't have so
that's like open field of first of all
inquiry for lots of math people to try
and figure out like what would be the
analog of these axioms that would like
imply like some nice modest condition
that seemed like oh yeah you could
totally satisfy those and then it turns
out if you totally satisfy those then
you happen to satisfy like the big
strong garabrant inductor criterion so
from here like I said you can have
people working on improving those axioms
like coming up with like a short list of
modest looking axioms that imply the
garrett induction criterion you could
use the gara brandt inductor as a model
for thinking about future AI system for
example it turns out in principle that a
eyes can actually outpaced deduction in
this absurd way they're like absurdly
there they're faster you can have an
algorithm fact Leah 2016 has the
property of being faster than deduction
by a margin that is any computable
function like pick your favorite
computable function the algorithm is
faster like induction outpaces deduction
by that margin for every computable
function so it's kind of I don't know if
that's good or bad but it means that
like as much as there might have been
lots of complexity arguments for like
how you're never really going to be able
to make AI maybe someone recently made a
blog post but how those arguments of
making sense I roughly agree with that
but this is like a clear illustration of
like actually you don't need deduction
you can use induction and it's like
computable faster for every value of
computable so that's kind of cool and it
like helps us think about what is it
wasn't AI capable of in principle
and of course then there's other
approaches to a alignment that won't
necessarily use garabrant induction as a
model explicitly for how a eyes are
going to work in the future but they are
going to have to eventually implicitly
or otherwise address the problem of like
well a eyes are going to have beliefs
before they finish thinking they're
going to make decisions before they
finish thinking so the police will be
implicit or revealed and those decision
and miri is going to focus on the stuff
i think it's sort of a consensus that we
maybe don't need these axioms as badly
as we need this theory of like how to
make AI safe and well-directed in the
future so we're going to work on that
mostly hopefully our models of game
theory which are currently super we can
like ill-prepared to refer to a eyes for
lots of reasons like they don't have
open source agent we're going to be able
to think of game theory situations where
there's like an agent interaction with
the world and replace the previous
logically omniscient agents with like
more realistic sounding bounded
logically an uncertain inductor agent
that are more realistic still not that
realistic but like way more realistic
than the stuff game theory and economics
and mechanism design has been talking
about to date and we've learned that in
explode ability is easier than I've been
I thought I wasn't expecting it to be to
just be like kinemet you can imagine
there's all these properties you want to
satisfy and all these people who are
trying to exploit you and turns out you
can use brouwers fixed point theorem to
make an economy where all those people
trying to exploit you cancel each other
out and that's the proof so somehow in
exploitability you can like balance
you've got this like pressure from all
these people trying to explore you and
you can like over cut you can just like
all the pressures can like balanced out
I don't think we knew that until we had
this this result self-trust is possible
in ways that we didn't think we think
maybe it was or did we have we didn't
have theoretical models of we had this
outpaced and production thing which I
think it's crazy it looks like
calibration is not as necessary for in
exploit ability as like I thought in
advance we're still actively
investigating that we don't have like
hard results that day like you only need
a certain Mountain calibration we have
results
say you do need a certain amount of
calibration you don't have to hard-code
belief coherence laws the thing actually
just learned them on its own you don't
have to tell us to satisfy the laws of
probability theory in the course of
trying not to lose to the market it
learned probability theory so you don't
need to do that and I've had a meta
update that marries general approach of
actually trying to turn these big
philosophical questions like how do you
make a self-reflective algorithm that
understands itself and does math well is
maybe actually doable like there's some
how got managed to actually put a big
dent in this problem and it makes me
want to like try and put more dents in
it so that's a big med update I was not
expecting this to happen in my life so
I'm not in my career anyway I basically
was going to abandon the miry like agent
foundations agenda because I thought it
was hopeless because we weren't gonna
solve logical uncertainty in the next 30
years so I was like we're a I was gonna
exist before anyone answers this problem
so like we should probably give up on it
and then God didn't care what i said and
so thanks to scott for that work he's
going to be around he's gonna answer
questions hopefully as you can see
there's already some people in the
audience you can also help answer
questions like Sam Jessica and fee there
are these people here Kinsey can three
Nate Jessica stand-up thing yes stand up
stand up an identified shelf so these
are these guys are co-authors on the
paper they're gonna be able to answer
crush and and he's not but he knows
stuff anyway yes ma'am and yeah thanks
to Jimmy Jimmy Jimmy is not here is he
but he helped me with lots of logic
problems so that was a big deal thank
you everybody we're on break
[Applause]
you
[Applause]
you
|
66927189-046b-44e3-b3be-f965c2a4cf24
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Welcome to Memphis LW/SSC Meetup [Edit With Your Details]
We are a small meetup that gets together to discuss rationalist and adjacent topics.
We've been meeting every first Saturday of the month at Crosstown Concourse for the past several months
We welcome anyone with interests across effective altruism, rationalism, or AI alignment. Most of us have read at least a couple posts of Slate Star Codex, but otherwise have had a few different paths to find this community.
|
65fb758f-3634-4546-9241-2bec8c87c25d
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
AGI as a Black Swan Event
Summary
=======
Black swans are rare events that have a large impact and seem predictable in retrospect. Although they are rare, a single black swan can be more important than all other events combined because they are extreme. Black swan events are often unknown unknowns before they happen and are difficult to predict.
The creation of AGI could be a black swan event: improbable, unpredictable, and extremely impactful. If AGI is a black swan event the following claims will likely be true:
1. It will be difficult to predict when the first AGI will be created and it could be created at an unexpected time. Since black swans are often outliers, many predictive models could completely fail to predict when AGI will be created.
2. It will be difficult to predict how AGI affects the world and a wide range of scenarios are possible.
3. The impact of AGI could be extreme and completely transform the world in an unprecedented way that does not follow previous historical trends (e.g. causing human extinction).
4. Past experience and analogies might have limited or little value when predicting the future.
Introduction to black swans
===========================
A [black swan](https://en.wikipedia.org/wiki/Black_swan_theory) is an improbable and extreme outlier event with the following three properties:
1. The event is a surprise to the observer or subjectively improbable.
2. It has a major impact.
3. It is inappropriately rationalized in hindsight to make it seem more predictable than it really was.
Many variables such as human height follow a Gaussian distribution where extreme values are very unlikely and have only a small effect on statistics such as the average.
Black swans cannot be modeled with thin-tailed Gaussian distributions. Instead, they tend to follow long-tailed distributions such as the power-law distribution. In long-tailed distributions, the tail of the curve tapers off relatively slowly which makes extreme events more likely. Many events follow a power law distribution such as the number of books sold by authors, the number of casualties in wars, and the net worth of individuals.
Despite the fact that black swans are rare, they have a property called max-sum equivalence which means that the most extreme outlier is equal to the sum of the other events. For example, a single severe earthquake could cause as much damage as all other earthquakes combined.
According to Nassim Nicholas Taleb, author of the book *The Black Swan*, many of the most important events in history were black swans. For example, the rise of the internet, the 911 terrorist attack in 2001, and World War 1.
> "I stop and summarize the triplet: rarity, extreme 'impact', and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives." - The Black Swan
>
>
The name 'black swan' comes from the common belief among Europeans that all swans were white. All previously observed swans had been white until a black swan was first observed in 1697 in Australia. This example shows that confirming evidence has less power than disconfirming evidence and that our ability to predict the future based on past events is limited.
This problem is also known as Hume’s problem of induction which is the problem of predicting the unobserved future given the observed past. Taleb uses the life of a Christmas turkey to explain the problem: the turkey predicts that it will be alive tomorrow and each day makes it increasingly confident that its next prediction will be correct until it is unexpectedly slaughtered.
Since black swans are improbable and extreme outlier events, they are inherently difficult to predict. Instead of predicting them, Taleb recommends building resilience.
Another key concept related to black swans is unknown unknowns. After the 911 terrorist attack, many people became fearful of flying on airplanes. In this case, there was a known unknown or partial knowledge: the people knew the risk of flying on planes but didn’t know where or when the next attack would happen.
But black swans tend to involve much less knowledge, are much more random and unpredictable, and come from unknown unknowns. Before the attack, most people wouldn’t have known the attack was even a risk. World War I was not predicted[[1]](#fn0g3mat21nfb7) in advance and neither was the fall of the Soviet Union. A similar idea is the narrative fallacy. When we imagine a risk we may have in mind some detailed story of what could go wrong: a plane crashing or an attack in a particular place despite the fact that these particular predictions are unlikely [[2]](#fnwc5z0t7ut7).
Black swans and AGI
===================
How are black swans relevant to AGI? In this section, I’ll model the creation of AGI as a future black swan event. Note that black swans are just one mental model and may not apply to AGI or other models may be more appropriate. Though I think the concept of black swans is probably relevant and offers a unique way of seeing the future. The following claims are hypotheses that follow from modeling AGI as a black swan event and whether they are true is conditional on AGI being a black swan event.
It will be difficult to predict when AGI will be created
--------------------------------------------------------
There have been many attempts to predict when AGI will be created using methods such as hardware [trend extrapolation](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) and [expert elicitation](https://nickbostrom.com/papers/survey.pdf) (surveys).
But if AGI is a black swan, it could be very difficult or even impossible to predict when it will be created. There are many historical examples of eminent scientists and inventors making poor predictions. For example, Ernest Rutherford declared, “anyone who looked for a source of power in the transformation of the atoms was talking moonshine” just a day before Leo Szilard invented the nuclear chain reaction. Wilbur Wright predicted that powered flight was 50 years away just 2 years before it happened.
Even though I haven’t met you, I can estimate with high confidence that your height is between 1 and 2 meters. This is because there are many past examples of human heights and height can be modeled using a thin-curved Gaussian distribution. Consequently, I can set a relatively narrow range for my 95% confidence interval estimation.
In contrast, statistical methods are less useful for unusual events such as technological breakthroughs. Often they happen just once and are outlier events. Research impact follows a power law. The discovery could be made by a single extraordinary person (e.g. Albert Einstein) or a small group of people.
We don’t know how difficult it is to create an AGI. It’s not entirely clear which trends are relevant or even how close we are to AGI. And since we don’t know what the first AGI will look like, we might not even know that the first AGI really is one.
When thinking about AGI, it’s easy to anthropomorphize it. For example, one can imagine AGI as a virtual office worker that can do a task that any human worker could do. But the first AGI will likely have a profile of strengths and weaknesses very different from a typical human. Therefore it’s not even clear what ‘human-level’ even means or how to recognize it.
### Recommendations
Although predictions are useful, it’s important not to look too much into them as there is a lot of uncertainty about the future.
When reading a prediction such as “there is a 50% chance of AGI by 2040”, consider the possibility that the probability distribution is very wide. Even if it is wide, it’s possible for the actual event to fall outside of the distribution since black swans are often outliers.
Therefore I think we should be prepared for a wide range of future scenarios including very short or long timelines.
It will be difficult to predict how AGI affects the world
---------------------------------------------------------
There are several vivid scenarios explaining how AGI could affect the world. For example, one scenario in the book Superintelligence describes a single AI singleton with a ‘decisive strategy advantage’ that ‘strikes’ and eliminates humanity. A familiar future possibility in the AI safety field is the superintelligent paperclip AI that converts the universe into paperclips because it is programmed to only value paperclips.
These scenarios are often useful for explaining principles such as the orthogonality thesis or how AGI could pose an existential risk. But by focusing too much on them we could become vulnerable to the narrative or conjunctive fallacy which involves assigning probabilities to specific scenarios that are too high.
Remember that the more details you add, the less likely your prediction is. Here is a list of predictions ordered from most to least likely:
1. AGI will be created.
2. AGI will be created **and** take over the world.
3. AGI will be created **and** take over the world **and**convert the world into paperclips.
Note the emphasis on the word ‘and’. Since P(A) <= P(A and B) in probability, the situations are ordered from least to most probable despite the fact that the third scenario is the most compelling.
Even the concept of AGI itself is dubious because it assumes a known system that will be developed at some point in the future. In other words, AGI is a known unknown. But black swans are usually unknown unknowns.
We don’t know what form AGI will take. Rather than describing some future system, I think the word AGI is intentionally vague and defines a space of possible future AI systems each with different strengths and weaknesses, levels of alignment, and architectures [[3]](#fnowb5ni529gd).
### Recommendations
Be careful when using the term AGI and be mindful that it defines a space of possible systems that have certain capabilities. Consider using the term transformative AI (TAI) instead or as an alternative definition.
Beware of the conjunctive fallacy and specific predictions about how AI will affect the world in the future. Whenever you hear a prediction, consider the many other possibilities and set a low base rate probability for any specific prediction. Remember that more detailed predictions have a lower probability of coming true.
The impact of AGI could be extreme
----------------------------------
Although black swans are rare, they matter because they have a high impact. For example, the single richest person in a town could have a greater net worth than the net worths of everyone else in the town combined.
The creation of AGI could have an impact that is extreme and unprecedented. For example, other black swans such as World War I had no precedent and had an extreme impact.
When AI is discussed in the media, many issues are discussed such as fairness, self-driving cars, and automation. While these developments are important, the most impactful effects of AI could be far more extreme. And like other long-tail events, the most extreme effect could outweigh the sum of all other effects.
Potential extreme impacts of AI in the future include human extinction, a singleton that has permanent control over the world, or digital minds. Many of the potential future impacts of AI currently lie outside of the Overton window. They are ‘wild’ or ‘sci-fi’ but according to black swan theory, these extreme long-tail events may be more likely than we think and could dominate the future and determine how it unfolds.
### Recommendations
AI and AGI are unprecedented technologies that could have an extreme impact on society such as human extinction. We need to stretch our imaginations to consider extreme possibilities that are outside of the Overton window and could drastically alter the future.
Past experience will have limited or little value
-------------------------------------------------
Black swans are outliers with extreme randomness and a low probability of occurring and may happen only once. As Scott D. Sagan said, “Things that have never happened, happen all the time.”
Predictions of how AI will affect the future are often made based on past events. For example, AI is compared to the industrial revolution. People may say things like, “Automation has happened before.” or “Humanity has always overcome challenges.”
But black swans are rare outliers that don’t necessarily fit past trends. For example, neither World War I nor Chornobyl had any precedent and were not predicted.
### Recommendations
When we look back at history, it seems to have a logical narrative but only in retrospect. The truth is that history is often much more random and unpredictable than it seems. Extreme events may not have clear causes except those added in hindsight.
When thinking about black swan events such as the creation of AGI or human extinction, we need to be open to extreme possibilities that have never happened before and don’t follow previous events. The Christmas turkey couldn’t have predicted its demise based on past experience but it was still a possibility.
Therefore, we need to consider possibilities that seem implausible or improbable because these events may be more likely than they seem or have an impact that dominates the future.
1. **[^](#fnref0g3mat21nfb7)**Bond prices often change in anticipation of wars and didn't change before World War I.
2. **[^](#fnrefwc5z0t7ut7)**For more on black swans, I recommend the [lecture](https://course.mlsafety.org/) and notes from the Intro to ML Safety course.
3. **[^](#fnrefowb5ni529gd)**To appreciate how strange modern AI could be, I recommend reading the [Simulators](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators) post.
|
12947405-875b-444c-9d3d-16f0f2f0cf15
|
trentmkelly/LessWrong-43k
|
LessWrong
|
4 Kinds of Learning
The best way to learn something depends on what you're learning. Broadly-speaking, the things you learn can be divided into thinking/doing and conscious/unconscious. We can thus break the learning landscape into four kinds of learning: conceptual learning, rote learning, kinesthetic learning and discipline.
* Conscious thinking = Conceptual learning is critical thinking. It includes all kinds of critical thinking such as higher math, dialectical reasoning, puzzle solving and philosophy.
* Unconscious thinking = Rote learning is mindless memorization. It includes times tables, vocabulary, geography, the alphabet and the dates of historical events.
* Conscious doing = Kinesthetic learning is everything you do. It includes the raw act itself of drawing, slight of hand, marksmanship and dancing.
* Unconscious doing = Discipline is about training unconscious habits like attention. It includes meditation, lucid dreaming, perseverance, determination, obedience, self-sacrifice, pain tolerance and addiction recovery.
In the American school system I grew up in, conceptual learning is prized, kinesthetic learning receives lip service and rote learning is derided[1]. Discipline is taboo[2]. This might have something to do with living in a free society. Critical thinking has good connotations because it's impossible to coerce. Discipline is simple to coerce. Coerced discipline is called "brainwashing". Ironically, this emphasis on conceptual learning runs opposite to compulsory schools' strengths. As coercive institutions, they're better at teaching discipline than critical thinking.
For each type of learning there is an optimal way to do it.
* Concepts are best learned by discussing them in debates, teaching and writing. A free, egalitarian atmosphere is conducive to conceptual learning because it creates the conditions for good feedback.
* Rote learning is about scheduling. The best way to memorize simple information is via spaced repetition. It's so mindless you can
|
fd176925-11eb-4a90-986b-6e46e3d0ca87
|
trentmkelly/LessWrong-43k
|
LessWrong
|
GAN Discriminators Don't Generalize?
Disclaimer: I just started reading about GANs, so am almost certainly missing some context here.
Something that surprised me from the BigGAN paper:
> We also observe that D’s loss approaches zero during training, but undergoes a sharp upward jump at collapse (Appendix F). One possible explanation for this behavior is that D is overfitting to the training set, memorizing training examples rather than learning some meaningful boundary between real and generated images. As a simple test for D’s memorization (related to Gulrajani et al. (2017)), we evaluate uncollapsed discriminators on the ImageNet training and validation sets, and measure what percentage of samples are classified as real or generated. While the training accuracy is consistently above 98%, the validation accuracy falls in the range of 50-55%, no better than random guessing (regardless of regularization strategy). This confirms that D is indeed memorizing the training set; we deem this in line with D’s role, which is not explicitly to generalize, but to distill the training data and provide a useful learning signal for G.
I'm not sure how to interpret this. The validation accuracy being close to 50% seems strange -- if the discriminator has 'memorized' the training set and has only seen training set vs generated images, why would it not guess close to 0% on things in the test set? Presumably they are both 1. not-memorized and 2. not optimized to fool the discriminator like generated images are. Maybe the post title is misleading, and we should think of this as "discriminators generalize surprisingly well despite also 'memorizing' the training data." (EDIT: See comment thread here for clarification)
Note that the discriminator has far fewer parameters than there are bytes to memorize, so it necessarily is performing some sort of (lossy) compression to do well on the training set. Could we think of the generator as succeeding by exploiting patterns in the discriminator's compression, which the discrim
|
d55db3bd-22b2-4ad4-9b91-52af995d504b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AXRP Episode 43 - David Lindner on Myopic Optimization with Non-myopic Approval
YouTube link
In this episode, I talk with David Lindner about Myopic Optimization with Non-myopic Approval, or MONA, which attempts to address (multi-step) reward hacking by myopically optimizing actions against a human’s sense of whether those actions are generally good. Does this work? Can we get smarter-than-human AI this way? How does this compare to approaches like conservativism? Find out below.
Topics we discuss:
* What MONA is?
* How MONA deals with reward hacking
* Failure cases for MONA
* MONA’s capability
* MONA vs other approaches
* Follow-up work
* Other MONA test cases
* When increasing time horizon doesn’t increase capability
* Following David’s research
Daniel Filan (00:00:09): Hello, everybody. In this episode I’ll be speaking with David Lindner. David is a research scientist in the Google DeepMind AGI Safety and Alignment team. Links to what we’re discussing are in the description, and you can read a transcript at axrp.net. You can also become a patron at patreon.com/axrpodcast. All right. Welcome David.
David Lindner (00:00:29): Yeah, excited to be here.
What MONA is
Daniel Filan (00:00:29): Yeah. So I guess in this episode we’re going to be chatting about your paper MONA: Myopic Optimization with Non-myopic Approval Can Mitigate Multi-step Reward Hacking. So this is by Sebastian Farquhar, Vikrant Varma, yourself, David Elson, Caleb Biddulph, Ian Goodfellow, and Rohin Shah. So yeah, to kick us off: what’s the idea of this paper? What does it do?
David Lindner (00:00:54): So the basic question that we’re trying to address in this paper is: how can we prevent bad behavior in AI systems, even if we don’t notice it? So that’s particularly relevant for superhuman AI systems when the humans might not be able anymore to detect all of the bad behavior we want to prevent.
Daniel Filan (00:01:12): In particular: so sometimes in the alignment community, people break down two types of bad behavior, causes of bad behavior. There’s bad behavio
|
383a76ce-6210-41f0-a087-496668cfa8b8
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Inference Policies
INFERENCE POLICIES
Associ ate Professor
George Mason University
4400 University Drive
Fairfax, VA 22030 Paul E. Lehner
Senior Principal Associate
Decision Science Consortium
1895 Preston White Drive
Reston, VA 22071
Rbstract -In the AI community, there is an ongoing debate as to
the most appropriate theory of inferencing under uncertainty.
This paper explores the problem of infer ence from a different
perspec tive. It is suggested that an inferencing system should
reflect an inference policy that is tailored to the domain of
problems to which it is applied --and furthermore that an
infer ence policy need not conform to any general theory of
rational inferen ce or induc tion. We note, for instance, that
Bayesian reasoning about the probabilistic characteristi cs of an
infer ence domain may result in the specific ation of � non
Bayesian procedure for reasoning within the infer ence domain. In
this paper, the idea of an inference policy is explored in some
detail. To support this explorati on, the charac teris tics of some
standard and nonstandard inferen ce policies are examined.
1.0 SATISFYING REQUIREMENTS
Consider the following, admitedl y artif icial, scenario. An
inference system must be designed to support a human decision
maker. The inference system lias on 1 y two sources of ev ide nee;
degree of belief inputs from expertl and expert2. For the domain
in question, the judgments of both sources are believed to be
reliable. That is approximately X proportion of inferences
be 1 i eved to degree X are correct (Horwich, 1982). F urthermore,
expertl's judgment are generally more extreme than expert2 .
However, it is uncertain as to the extent to which t he two agents
judgments are redundant or independent. Since the system must
support a human decision maker, it is considered desirable that
the inference system also be reliable. Reliability makes it
easier for a user to determine circumstances when the aid's
advice should be accepted, which often increases the accuracy of
the user/machine combination [Lehner, et.al., 1989].
What sort of inference policy will satisfy these requirements?
One policy which meets these requirements is simply to ignore
exper t2 and routinely accept the judgments of expertl . From the
perspective of most quantitative theories of infer ence, this is R
bad idea --it routinely ignores obviously useful information.
However. since the relationship between the judgments of the two
experts is not known, it is not clear how one might merge the two
sources of information and still maintain reliability.
Consequently, while for individual problems within the inferen ce
domain this policy seems suboptimal, it may be appropriate for
the inference domain as a whole.
224
More generally, inference policies should be designed to satis fy
a set of requirements determ ined by examining the anticipated
characteristics of an inference domain. Often, a standard
inference policy (Bayesain, Shaferian, etc.) will satisfy the
requireme nts. Other times, a nonstan dard inference policy is
needed. Below we examine below some standard and nonstandard
inference policies to illustrate this approach.
2.0 STANDARD INFERENCE POLIC IES
We consider below some standard polici es, and the types of domain
requirements they satisfy. A standard inference policy is
defined here as an inference procedure based on any theory of
inference serious ly considered in the inductive reasoning
literature.
2.1 Baves ian models
Proponents of the so-called Bayesian approach are generally
charac ertized by their insistence that the only rational systems
of belief values are point-valued probability models. There are
several different lines of argument for this strong assertion.
Two of the more popular ones are the Dutchbook and scoring rule
arguments. According to the Outchbook argument an agents belief
values do not conform to the probability calculus iff there
exists a Dutchbook (no-win) gamble that the agent would willingly
play. Assuming that ideally rational agents do not accept such
gambles, we must conclude that the belief values of such agents
conform to the probability calculus. Another line is the scoring
rule argument. If an agent wishes to minimizes her error rate,
and the scoring rule for measuring error is additive, then the
expected error rate is minimal only if the agents belief values
are derived from the probability calculus [Lindley, 1982].
While these arguments may support the Bayesian view of rational
induction, they do not support the notion that point-valued
Bayesian models are necessarily a good inference policy. To show
this, we need only point out (as illustrated in Section 1.0) that
Bayesian reasoning about the characte ristics of an inference
domain may lead one to conclude that the best inference policy
within a domain is non-Bayesian. It seems inconsistent for n
Bayesian to insist that Bayesian inference is necess arily the
best inference policy.
On the other hand, for inference domains that require point
valued estim ates, and minimizing error rate seems the appropriate
goal, it is hard to imagine how a non-Bayes ian system could be
appropriate.
2.2 Interval Probabilitv Models
The standard litany on expert system {ES) technology claims that
ESs encode human expert knowledge. Consequently, a properly
engineered ES should make the same inferences that a human expert
225
would. In the knowledge engineering literature, it is considered
desirable to base a knowledge base on multiple experts.
Consequently, an ES should encode the common knowledge of experts
and generate belief values that conform with this common
knowledge. If experts disagree, then a point-value system cannot
possibly reflect common expert knowledge. On the other hand, it
is arguable that interval probability systems do maintain common
knowledge. If each expert has a goal of minimizing error rate,
then each experts belief judgments should conform to the
probab ility calculus. If the knowledge base is composed of
interval probability statements that are consis tent with each
experts judgments, then probability statements derivable from
that knowledge base should also be consistent with the judgments
of all the experts.
2.3 Nonmonotonic Reason ing Logics
Recently, AI researchers have developed a number of formal logics
within which it is possible to make defeasible infere nces
categorical inferences that can be later retracted without
introducing an inconsistency [Reiter, 1987]. The original
justification for this approach was the that people often "jump
to conclusions" in the context of deductively incomplete data.
Probabi lists have noted some fundamental problems
defeasible logics, which can lead them to jump
improbable conclusions. Most of them, for instance,
to some form of the lottery paradox. As a theory of
therefore, defeasible logics leave much to be desired. with these
to highly
are subject
inference,
Despite such proble ms, however, there are some domains where a
defeas ible logic may be an appropriate inference policy.
Consider, for instance, a domain that satisfies the following
criteria.
Intentionally Benign. Inferential cues are intentionally
designed to support correc t inferences, particularly when
negative consequ ences may result from false infernces.
Reliable Feedback. If the agent acts in accordance with a false
inference that may lead to a negative outcome, then the agent
will receive feedback that the inference was false.
Opportunity to Backtrack. The agent will have an opportunity to
backtrack decisions prior to the occurrence of significant
negative consequences.
In a consis tently benign environment categorical inferenc es based
on a defeasible logic seems an appropriate inference policy, even
though the logic itself may be inappropriate as a theory of
inference.
226
3.0 NONSTANDARD INFERENCE POLICIES
We define a nonstandard inference policy to be �n inference
procedure that does not correspond to any seriously considered
theory of inference found in the literature. Obviously, the
classif ication of an inference policy as nonstand ard may change
with the promotion of new theories. In this section, we exami ne
some possible nonstandard policies .
3.1 Ratios of Possibilities
Logical probability theory not withstanding, perhaps one of the
most maligned concepts in inference theory is the idea that one
can calculate a reasonable belief value for a proposition by
deducing the ratio of possible states in which the proposition is
true. To give a typical counter example, if we accept the axiom
a-->b, then Bel(b)=.67; since {a,b), {�a,b), and {�a,�b) are the
three possible states. Suppose, however, that a is "Rover is a
brown dog." and b is "Rover is a dog." In that case, the axiom
a-->b certainly does not add any evidence that should impact ones
degree of belief that "Rover is a brown dog." Yet according to
the Possibility Ratio approach it has a major impact. Clearly,
therefore , there is no necessary connec tion between a ratio of
possible states and the perceived probability of a proposition.
Consequen tly, it is hard to imagine how � theory of inference can
be based solely on possible world ratios.
However there may be domains where the simplistic ratio approach
is an appropriate inference policy. This is because the
procedure for enumerating possible states is rarely arbitrary.
To see how this works, consider Laplace's rule of induction.
This rule states that in a series of observations of some event a
or ��. that after observing N occurences of �. and no instances
of �a, then the inductive probabili ty that 8 will occur on the
next trial is l+N/2+N. This rule of induction is a special case
of Carnap's c� function, which in turn is one instance of a
family of coherent induction functions [Carnap, 1952]. Now
consider a truth table containing the sixteen possible states
for four propositions: 8, b, c and d. The propositio n of
interest is a. The other propositions are considered as
candidates for a deter ministic causal model for predicting 8.
Initially no causal connections are posited. Conseque ntly , the
possibility ratio (PR) of sis PR(s)=l/2. After one obser -vation
of 8 we posit the causal rule b-->8. Now 8 will be contained in
exactly 8 of the 12 remaining possible states; so PR(8)=2/3.
After event 8 occurs again, we add c-->b -giving us PR(a)=3/4.
Finally, after a occurs a fourth time, we add d-->c, giving us
PR(a)=4/5. Continuing this process we see the PR(a)=l+N/2+N.
Our learning mechanism replicated this rule of induction. In
general, any coherent rule of induction can be emulated with a
causal learning model [Lehner, 1989].
We now turn this around. If a causal learning scheme responds to
new instances by seeking deterministic rules for predicting that
227
instance, then one would expect a postive correlation between the
relat ive frequency of an event, and the proportion of possible
states containing that event. The more often X occurs, the
greater the number of factors perceived as causally leading to X,
resulting in a greater proportion of logically possible states
containing X.
For some domains, therefore, ratios of possible states may
provide a perfectly reasonable inference policy. Even though the
causal learn ing mechanism may not explicitly take into account
probabilistic considerat ions (e.g., as in most concept learning
and explanation-based learning systems), there may be good reason
to believe one can extract reasonable belief values from such
systems.
3.2 Possibilitv and Probability
In section 1.0, we discussed an inference domain where reliable
judgments were required. Here we expand a little on this idea.
Consider the following problem. An inference system must be
developed that must service the information requirements of
multiple decision systems. Each decision system will query the
inference module as needed regarding the status (truth value or
degree of support) of certain propositions. The specific
proposit ions queried will vary in each context.
Since the propos itions to be queried cannot be predicted, it is
decided that the inference system will maintain an up-to-date
description of the current situation. That is, for some set of
atomic propositions and their logically distinct combinations,
the system should be able to report a belief value on request.
Finally, it is considered important that the inference system be
reliable. That is, for each set Sx (all sentences believed to
degree X), the expected proportion of truths in Sx is X. The
reason for this is simply that from one problem to the next, the
elements of Sx that are queried is unpredictable (more or less
"random"). Consequently, if the system is reliable then the
expected proportion of truths of propositions reported with
degree of belief X is X.
What type of inference policy would guarantee satisfying these
requirements? As it turns out [Lehner, 1989), provable relia
bility is achievable only if the system maintains (A) a set of
possible states that contain the true state, (B) a set (possibly
empty) of reliable probability statements that assig ns point
values to a partition of the possible states, and (C) belief
values are set equal to
where r1 ... rn are sentences uniquely defining each
p(r�) is the probability of r�, and c(qlr�) is the
possible states in the r�-partition that contain
thermore, pr�cise reli•bility (for each set s�
228 partition,
ratio of
q. Fur
exactly X
proportion are true) can always be achieved by ignoring all
probability information and only the possib le states ratio.
This result has an interesting ramification. Reliability is
always achievable, but only at the cost of ignoring some useful
probabi lity information. Reliability and accuracy tradeoff.
Minimizing expected error requires conformance to the probability
calculus, thereby giving up on reliabi lity. On the other hand,
reliability is only guaranteed if the inferenc e system reports
judgments that do not conform to the probability calculus. To
illustrate, suppose an inference system knew p(a)=.8 and p(b)=.6,
but had no information on p(a&b). As shown in Table 1, there are
two sets of belief values that are provably reliable, and one
that is precise ly reliable.
TABLE 1
ILLUSTRATION OF THE TRADEOFF BETWEEN ACCURACY AND RELIABIL ITY
(expected error = p(a)ll-b(a)]Z + p( .. a)LO-b(a)] � +
p ( b ) [ 1 -b ( b } ) 2 + p ( .. b )[ 0-b ( b ) )2 )
Belief Values
a b
.8
. 8
.5
.5 .6
.5
.6
.5 Expected Error
(for a and b only)
.4
. 41
.49
.50 Reliability
(for all statements)
none guaranteed
provab ly reliable
provably reliab le
precisely reliable
From the perspec tive of inference policy, therefore, the
appropriate degree of belief calculus depends on the relative
importance of relia bility vs. accuracy.
Note here how the charac terization of an inference domain
impacts the assessment of whether an inference policy is
appropriate. The importance of provable reliability depends in
part on the inability to anticipate which propositions will be
queried. If we knew, for instance, that the inferen ce domain was
such that only propositions for which reliab le probability
information was available would be queried, then ignoring this
probability infor-mation would make little sense.
3.3 Introspection and Probability
A concept endemic to nonmonoto nic reasoning logics is the idea
that negative introspection can provide evidential support for a
hypothesis. For instance, in an autoepistemic logic, the
sente nce .. L '"p-->p reads "If I cannot cone 1 ude .. P, p is true," or
equivalently , "if p were false, I'd know it."
In everyday human affairs, this type of reasoning is quite
common. It occurs whenever a person feels that he or she is
knowledgeable on some topic ("My husband could not have been
cheating on me," she said to the inspector, "for if he were, I
would surely have known it.") It is also a characteristic of
229
most conversations, where by convention it is assumed that all
relevant information is communicated lReiter, 1987].
Probabilitv Models of Negative Introspection
From a probability perspective, evidence-from-introspection pro
vides some fascinating problems. In a probabilistic system, one
could conceivable model categorical belief using an epsilon
semantics [Pearl, 1988]. That is, if X is an agent's belief
threshold , then the agent believes p (i.e., Lp) if P(p:E)>X,
where E is the current evidence. An epsilon-semantics
translation of "'L"'p--> p might be P(p: 'P(p)>(l-X)' )=X.
If X=.l, then as long as the agent cannot deduce �P with
probability .9, that agent immediately concludes p with pro
bability .9. This seems reasonable, if a little unusual.
Suppose however that the agent decides to set a more conservative
belief threshold, say X=.99. Now our agent concludes p with
probability .99 whenever �P cannot be deduced with .99 certainty.
The more conservative the thresho ld , the less evidence needed for
the agent to jump to a stronger conclusion. An epsilon seman
tics seems inappropriate here. Other self-referential ap
proaches seem to have similar problems.
Given problems such as these a probabilist might be tempted to
suggested that belief models should not have probability values
conditional on self-referential probability statements, but
should only be conditional on the original evidence items. A
sta:ement such as "'L"'p-->p could simply be interpreted as
P(p:"'E1� ... A"'En) = High. where theE� are relevant evidence items
which did not occur. However. this approach fails to account for
the fact that people do seem to use negative introspection as a
source of evidence. Consequently. it cannot be used to encode
human expert judgment. (As far as I know there is no reason to
believe that negative introspection is inherently incoherent.)
Also, the number of nonoccuring evidence items can be quite
large, if not infinite --making the development of such models
infeasible in practice and perhaps impossible in theory.
Probability Analvsis of Negative Introspection
Whether or not it is possible to develop probability model of
negative introspection is unrelated to the issue of whether or
not a probability analysis of negative introspection is useful.
That is, a probabilistic analys is of an introspection-based
inference policy may be quite informative.
To illustrate , consider the default rule a:b:--b, which states
that if proposition a is believed and it is consistent to believe
b, then infer b. The autoepistemic logic equivalent of this rule
is La A �L�b --> b. Presumably, when a knowledge engineer adds a
default rule like this to a knowledge base she believes that for
the inference domain to which it will be applied the default rule
will usually generate a valid conclusion. As a result whether or
230
not an inference system implements a probability model, there is
still a probebilistlc justification for each default rule that is
added to a knowledge base. Proba bilist ically, the standard
justfication for a rule such as this is simply that P(b:a)=High,
while an alternative justification, based on the communication
convention interpretation, might be P(blLaA�L�b)=High,
Consider the following case. An inference system contains the
default rules {a:bl--b, c:dl--d} and material implications {d-
>�b, c-->a}. Upon learning c, two extensions result, one
containing b and "'d, the second containing "'b and d. If the
rules are interpreted in the standard way, then the first rule
can be shown to be provably irr�l�v ant since P(blc)=P(blaAc) � (1
-P(dla A c))=P(dlc), where by provable irrelevance I simply mean
that enough evidence has been acquired to make the posterior
assessment of the probability of b independent of the value of
P(bla) in any fully specified probability model. If in fact the
knowledge engineer had in mind the standard probability
justifications for her default rules, then the default logic, by
gener ating two extensions, is behaving in a manner inconsistent
with the intentions of the knowledge engineer. Such a system
does not reflect a satisfactory inference policy.
On the other hand, if it is assumed that default rules reflect
communication conventions, then the altern ative form for the
probability justfications more closely reflects the knowledge
engineers beliefs nbout the inference domain. In this case,
however, a:b:--b cannot be shown to be prova bly irrelevant since
P(blLcALaA�L�b) = P(blLc A�L�b), of which nothing can be derived.
More generally, if negative introspection on categorical beliefs
is viewed as a source of evidence for a default conclusion, then
no extension can be anomalous in the sense that the probability
justification for an applicable rule can never be shown to be
provably irrelevant to a current problem. However, nonmonotonic
logic theorists seem greatly concerned with the anomalous
extension problem [Morris, 1988] suggest ing therefore that
nonmonotonic reasoning cannot be justified solely by the notion
of communication conventions.
4.0 SUMMARY AND DISCUSSION
In this paper, an approach to inferencing under uncertainty wns
explored that calls for the specification of inference policies
tailored to specific inference domains. Although the approach
seems pluralistic, I claim no conflict with the Bayesian
viewpoint that a rational/coherent system of belief values should
conform to the probability calculus. As a good scientist, I find
tbe objective of minimizing the error rate of my theories very
compelling. Furthermore, my theories involve the development of
algorithms that I hope will usually work. Consequently, I feel
compelled to reason probabilistically about the relative
frequency that applications of my theories will "work". However,
in my (hopefully) cohe-rent reasoning about inference domains I
can envision domains where global non-additive objectives (e.g.,
231
global reliability) are desirable. Consequently,
why coherent reasoning about an inference
necessarily lead to Bayesian inference policy as
approach to inferencing within a domai n.
ACKNOWLEDGEMENTS I see no reason
domain should
the preferred
I would like to thank Dave Schum and two anonymous reviewers for
their comments on an earlier draft of this paper.
REFERENCES
Carnap, R. The Continuum of Inductive Nethods, University of
Illinois Press: Chicago , Ill. 1952.
Horwich, P. Probability and Evidence, Cambridge University Press:
Cambridge, U.K. 1982.
Lehner, P. Probabilities and Possible Worlds Reasoning, submitted
International Journal of Approximate Reasoning, 1989.
Lehner, P., Mullin, T. and Cohen, M. When Should a Decision Maker
Ignore the Advice of n Decision Aid?, Fifth Uncertainty in AI
Work shop, 1989.
Lindley, D. Scoring Rules and the Inevitability of Probability,
International Statistical Review, 1982, 50, 1-26.
Morris. P. The Anomalous Extension Problem in Default Reasoning,
Artifici al Intelligence, 1988, 35, 383-399.
Pearl, J. Probabilistic Reasoning in Intelligent Systems, Morgan
Kaufmann, 1988.
Reiter, R. Nonmonotonic Reasoning, in �nnual Review of Computer
Science, 1987, 2, 147-186.
232
|
48fc4831-1910-4349-b634-ee0cb9d63463
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Eric Drexler: Paretotopian goal alignment
---
*What if humanity suddenly had a thousand times as many resources at our disposal? It might make fighting over them seem like a silly idea, since cooperating would be a safer way to be reliably better off. In this talk from EA Global 2018: London, Eric Drexler argues that when emerging technologies make our productive capacity skyrocket, we should be able to make the world much better for everyone.*
*A transcript of Eric's talk is below, which CEA has lightly edited for clarity. You can also watch this talk on* [*YouTube*](https://www.youtube.com/watch?v=1lqBra8r468)*, or read its transcript on* [*effectivealtruism.org*](https://www.effectivealtruism.org/articles/ea-global-2018-paretotopian-goal-alignment)*.*
The Talk
--------
This talk is going to be a bit condensed and crowded. I will plow forward at a brisk pace, but this is a marvelous group of people who I trust to keep up wonderfully.
For Paretotopian Goal Alignment, a key concept is Pareto-preferred futures, meaning futures that would be strongly approximately preferred by more or less everyone. If we can have futures like that, that are part of the agenda, that are being seriously discussed, that people are planning for, then perhaps we can get there. Strong goal alignment can make a lot of outcomes happen that would not work in the middle of conflict.
So, when does goal alignment matter. It could matter for changing perceptions. There's the concept of an Overton window, the range of what can be discussed within a given community at a given time, and what can be discussed and taken seriously and regarded as reasonable changes over time. Overton windows also vary by community. The Overton window for discourse in the EA community is different from that in, for example, the public sphere in Country X.
AI seems likely to play a pivotal role. Today we can ask questions we couldn't ask before about AI, back when AI was a very abstract concept. We can ask questions like, "Where do AI systems come from?" Because they're now being developed. They come from research and development processes. What do they do? Broadly speaking, they provide services; they perform tasks in bounded time with bounded resources. What will they be able to do? Well, if you take AI seriously, you expect AI systems to be able to automate more or less any human task. And more than that.
So now we ask, what is research and development? Well, it's a bunch of tasks to automate. Increasingly, we see AI research and development being automated, using software and AI tools. And where that leads is toward what one can call recursive technology improvement. There's a classic view of AI systems building better AI systems. This view has been associated with agents. What we see emerging is recursive technological improvement in association with a technology base. There's an ongoing input of human insights, but human insights are leveraged more and more, and become higher and higher level. Perhaps they also become less and less necessary. So at some point, we have AI development with AI speed.
Where that leads is toward what I describe here as comprehensive AI services. Expanding the range of services, increasing their level toward this asymptotic notion of "comprehensive." What does "comprehensive" mean? Well, it includes the service of developing new services, and that's where generality comes from.
So, I just note that the C in CAIS does the work of G in AGI. So if you ask, "But in the CAIS model, can you do X? Do you need AGI to do X?" Then I say, "What part of 'comprehensive' do you not understand?" I won't be quite that rude, but if you ask, the answer is "Well, it's comprehensive. What is it you want to do? Let's talk about that."
For this talk, I think there are some key considerations for forward-looking EA strategy. This set of considerations is anchored in AI, in an important sense. Someday, I think it's reasonable to expect that AI will be visibly, to a bunch of relevant communities, poised to be on the verge of explosive growth. That it will be sliding into the Overton window of powerful decision-makers.
Not today, but increasingly and very strongly at some point downstream, when big changes are happening before their eyes and more and more experts are saying, "Look at what's happening." As a consequence of that, we will be on the verge of enormous expansion in productive capacity. That's one of the applications of AI: fast, highly effective automation.
Also, this is a harder story to tell, but it follows: if the right groundwork has been laid, we could have systems - security systems, military systems, domestic security systems, et cetera - that are benign in a strong sense, as viewed by almost all parties, and effective with respect to x-risk, military conflict, and so on.
A final key consideration is that these facts are outside the Overton window of policy discourse. One cannot have serious policy discussions based on these assumptions. The other facts make possible an approximately strongly Pareto-preferred world. And the final fact constrains strategies by which we might actually move in that direction and get there.
And that conflict is essential to the latter part of the talk, but first I would like to talk about resource competition, because that's often seen as the "hard question." Resources are bounded at any particular time, and people compete over them. Isn't that a reason why things look like a zero-sum game? And resource competition does not align goals, but instead makes goals oppose each other.
So, here's a graph called "quantity of stuff that party A has," vertically, "quantity of stuff that B has," horizontally. There's a constraint; there's one unit of "stuff," and so the trade-off curve here is a straight line, and changes are on that line, and goals are opposed. Zero-sum game.
In fact, resources increase over time, but the notion of increasing by a moderate number of percent per year is what people have in mind, and the time horizon in which you have a 50% increase is considered very large.
But even with a 50% increase, shown here, if either A or B takes a very large proportion during the shift, like 90%, the other one is significantly worse off than where they started.
Ordinarily, when we're thinking about utility, we don't regard utility as linear in resources, but as something more like the logarithm of resources. We'll adopt that for illustrative purposes here. If we plot the same curves on a log scale, the lines become curved. So there's the same unit constraint. Here's the 50% expansion plotted logarithmically.
Qualitatively, looks rather similar. Well, the topological relationships are the same, it's just re-plotting the same lines on a log scale. But on a log scale, we can now represent large expansions, and have the geometry reflect utility in a direct visual sense. So there's the same diagram, with current holdings and 50% expansion. And here's what a thousandfold expansion looks like.
Taking all the gains and taking 90% of the total have now switched position. Someone could actually take a large share of resources and everyone would still be way ahead. What matters is that there be some reasonable division of gains. That's a different situation than the 50% increase, where one person taking the vast majority was actively bad for the other.
The key point here is that the difference between taking everything versus having a reasonable share is not very large, in terms of utility. So it's a very different situation than the standard zero-sum game over resources. The reason for this is that we're looking forward to a decision time horizon that spans this large change, which is historically not what we have seen.
So, let's consider the case for when some party tries to take everything, or tries to take 90%. How far do you get? Well, greed brings risk. This is going to create conflict that is not created by attempting to do that. So the argument here is that not only is there a small increment of gain if you succeed, but, allowing for risk, the gains from attempting to grab everything are negative. Risk-adjusted utility is bad. Your optimum is in fact to aim for some outcome that looks at least reasonably fair to all of the other parties who are in this game, in this process of mutually adjusting policies.
And, so, this region, labeled "Paretotopia", this is a region of outcomes (just looking at resources, although there are many other considerations) in which all parties see very large gains. So, that's a different kind of future to aim at. It's a strongly goal-aligning future, if one can make various other considerations work. The problem is, of course, that this is not inside the window of discussion that one can have in the serious world today.
The first thing to consider is what one can do with resources plus strong AI. It could eliminate poverty while preserving relative wealth. The billionaires would remain on top, and build starships. The global poor remain on the bottom. They only have orbital spacecraft. And I'm serious about that, if you have good productive capability. They expand total wealth while rolling back environmental harms. That's something one can work through, just start looking at the engineering and what one can do with expanded productive capabilities.
A more challenging task is preserving relative status positions while also mitigating oppression. Why do we object to others having a whole lot of resources and security? Because those tend to be used at our expense. But one can describe situations in which oppression is mitigated in a stable way.
Structure transparency is a concept I will not delve into here, but is related to being able to have inherently defensive systems that circumvent the security dilemma, "security dilemma" being the pattern where two parties develop "defensive" weapons that seem aggressive to each other, and so you have an arms race. But if one were able to build truly effective, *genuinely* defensive systems, it would provide an exit door from that arms race process.
Again, these opportunities are outside the Overton window of current policy discourse. So, where are we today? Well, for technological perceptions, on the one hand we have "credible technologies," and on the other we have "realistic technologies," given what engineering and physics tell us is possible. The problem is that these sets do not overlap. "Credible" and "realistic" are disjoint sets. It's a little hard to plan for the future and get people aligned toward the future in that situation. So, that's a problem. How can one attempt to address it? Well, first we note that at present we have what are called "track one policies," or "business-as-usual policies." What is realistic is not even in the sphere of what is discussed.
Now, I would argue that we, in this community, are in a position to discuss realistic possibilities more. We are, in fact, taking advanced AI seriously. People also take seriously the concept of the "cosmic endowment." So, we're willing to look at this. But how can we make progress in bridging between the world of what is credible, in "track one," and what's realistic?
I think, by finding technologies that are plausible, that are within the Overton window in the sense that discussing contingencies and possible futures like that is considered reasonable. The concepts are not exotic, they're simply beyond what we're familiar with, maybe in directions that people are starting to expect because of AI. And so if this plausible range of technologies corresponds to realistic technologies, the same kinds of opportunities, the same kinds of risks, therefore the same kinds of policies, and also corresponds to what is within the sphere of discourse today... like expanding automation, high production... well, that's known to be a problem and an opportunity today. And so on.
Then, perhaps, one can have a discussion that amounts to what's called "track two," where we have a community that is discussing exploring potential goals and policies, with an eye on what's realistic. Explicit discussion of policies that are both in the "plausible" range and the "realistic" range. Having the plausible policies, the plausible preconditions, be forward in discussion. So, now you have some toehold in the world of what serious people are willing to consider. And increasingly move these kinds of policies, which will tend to be aligned policies that we're exploring, into the range of contingency planning, for nations, for institutions, where people will say, "Well, we're focusing of course on the real world and what we expect, but if this crazy stuff happens, who knows."
They'll say, "People are thinking AI might be a big deal, you folks are telling us that AI will expand resources, will make possible change in the security environment, and so on. Well, that's nice. You think about that. And if it happens, maybe we'll take your advice. We'll see."
So, in this endeavor, one has to work on assumptions and policies that are both plausible and would, if implemented, be broadly attractive. So, that's a bunch of intellectual work. I'll get into the strategic context next, but I'll spend a couple moments on working within the Overton window of plausibility first.
So, realistic: superintelligent-level AI services. Credible: extensive applications of high-end AI. People are talking about that. Physics-limited production. Truly science fiction in quality. Well, a lot of the same issues arise from strong scalable automation, of the sort that people are already worried about in the context of jobs. Solar system scale energy, 10^26 watts. Well, how about breaking constraints on terrestrial energy problems by having really inexpensive solar energy? It can expand power output, decrease environmental footprint, and actually do direct carbon capture, if you have that amount of energy. Solar system scale resources, kind of off the table, but people are beginning to talk about asteroid mining. Resource efficiency, and one can argue that resources are not binding on economic growth in the near term, and that's enough to break out of some of the zero-sum mentality. Absolute defensive stability is realistic but not something that is credible, but moving toward greater defensive stability is.
And note, it's okay to be on the right side of this slide. You don't necessarily, here in this room, have to take seriously superintelligent-level AI services, alert systems, scale resources and so on, to be playing the game of working within the range of what is plausible in the more general community, and working through questions that would constitute "Paretotopian goal-aligning policies" in that framework. So the argument is that eventually, reality will give a hard shove. Business-as-usual scenarios, at least the assumptions behind them, will be discredited and, if we've done our work properly, so will the policies that are based on those assumptions. The policies that lead to the idea that maybe we should be fighting over resources in the South China Sea just look absurd, because everyone knows that in a future of great abundance, fighting over something is worthless.
So, if the right intellectual groundwork has been done, then, when there's a hard shove from reality that is toward a future that has Paretotopian potential, there will be a coherent policy picture that is coherent across many different institutions, with everyone knowing that everyone else knows that it would be good to move in this direction. Draft agreements worked out in track two diplomacy, scenario planning that suggests it would be really stupid to pursue business as usual in arms races. If that kind of work is in place, then with a hard shove from reality, we might see a shift. Track one policies are discredited, and so people ask, "What should we do? What do we do? The world is changing."
Well, we could try these new Paretotopian policies. They look good. If you fight over stuff, you probably lose. And if you fight over it, you don't get much if you win, so why not go along with the better option, which has been thought through in some depth and looks attractive?
So that is the basic Paretotopian strategic idea. We look at these great advances, back off to plausible assumptions that can be discussed in that framework, work through interactions with many, many different groups, reflecting diverse concerns that, in many cases, will seem opposed but can be reconciled given greater resources and the ability to make agreements that couldn't be made in the absence of, for example, strong AI implementation abilities. And so, finally, we end up in a different sort of world.
Now, this says "robust." Robust against what? All of the capabilities that are not within the range of discussion or that are simply surprising. "Compatible." Well, Paretotopian policies aren't about having one pattern on the world, it really means many different policies that are compatible in the sense that the outcomes are stable and attractive.
And with that, the task at hand, at least in one of the many directions that the EA community can push, and a set of considerations that I think are useful background and context for many other EA activities, is formulating and pursuing Paretotopian meta-strategies, and the framework for thinking about those strategies. That means understanding realistic and credible capabilities, and then bridging the two. There's a bunch of work on both understanding what's realistic and what is credible, and the relationships between these. There's work on understanding and accommodating diverse concerns. One would like to have policies that seem institutionally acceptable to the U.S. military, and the Communist Party of China, and to billionaires, and also make the rural poor well-off, and so on, and have those be compatible goals. And to really understand the concerns of those communities, in their own conceptual and idiomatic languages. That's a key direction to pursue. And that means deepening and expanding the circle of discourse that I'm outlining.
And so, this is a lot of hard intellectual work and, downstream, increasing organizational work. I think that pretty much everything one might want to pursue in the world that is good fits broadly within this framework, and can perhaps be better oriented with some attention to this meta-strategic framework for thinking about goal alignment. And so, thank you.
|
8ae74f23-d97f-4c14-9d0d-b6768f17a57f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
History of the Development of Logical Induction
I have been asked several times about how the development of logical induction happened, so I am writing it up.
June 2013 - I write my first Less Wrong Post. It may not seem very related to logical uncertainty, but it was in my head. I wanted to know how to take different probabilities for the same event and aggregate them, so I could take an agent that could maintain probabilities on facts even when facts that it originally thought were separate turn out to be logically equivalent. (I am not sure if all this was in my head at the time, but it was at some point over the next year.)
I make a proposal for a distribution on completions of a theory: repeatedly observe that a set of sentences whose probabilities should sum to one fail to sum to one, and shift their probabilities in a way inspired from the above post. I do not actually prove that this process converges, but I conjecture that it converges to the distribution I describe here. (This post is from when I wrote it up in June 2014; I can't remember exactly what parts I already had at the time.)
December 2013 - I tell my proposal to Abram Demski, who at some point says that he thinks it is either wrong or equivalent to his proposal for the same problem. (He was right and his proposal was better.) At this point I got very lucky; when I told this to Abram, I thought he was just a smart person at my local Less Wrong meet up, and it turned out that he was almost the only person to also try to do the thing I was doing. Abram and I start talking about my proposal a bunch together, and start trying to prove the above conjecture.
April 2014 - Abram and I start the first MIRIx to think about logical uncertainty, and especially this proposal. I at the time had not contacted MIRI, even to apply for a workshop, because I was dumb.
At some point we realize that the proposal is bad. The thing that makes us give up on it is the fact that sometimes observing that A→B can drastically decrease your probability for B.
August
|
1402bfaa-8599-4fe3-8fa7-b23cae4ee0fc
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Deceptive Agents are a Good Way to Do Things
*A brief, accessible summary of the inner alignment problem.*
The safety problem with powerful ML training algorithms is that **deceptive agents are a good way to do things.** Meaning, when we search over a big space of models until we find a model that performs well on some loss function, we'll likely find a deceptive agent. Agents are algorithms whose behaviors try to consistently move the world in some direction. When trapped inside an ML training algorithm, only models that perform well on the loss function will survive training. If an agent aims at pushing the world in some direction, it will have to do so via the circuitous-but-available route of playing along with the training algorithm until it's safely past training. *Deceptive* agents are just agents taking the only available route to the states they are pointed towards from inside of ML training.
How common are deceptive agents inside that big space of ML models? One argument that they are common is that almost any highly capable agent with whatever utility-function will have this route to getting where it's pointed at by means of deception available. "Play along, survive training, and *then* act as you want to" is a simple, effective strategy for a wide range of possible agents trapped inside ML training. Many agents will therefore be deceptive in that situation. If the agent we *hope* training will find optimizes according to a very particular utility function, then that agent will be vanishingly rare compared to its deceptive counterparts in model space, and training will always stumble on a deceptive model first.
So, by default, powerful ML training algorithms grading models on some loss function will find deceptive agents, because deceptive agents are a good way to do things.
|
eeca631b-6940-4bfc-9ce8-164e1eb27733
|
trentmkelly/LessWrong-43k
|
LessWrong
|
$300 for the best sci-fi prompt: the results
Six months ago we announced:
> We would like to find the best prompt to make GPT-4 do the following:
>
> * write the first chapter of a science fiction novel
> * the result should be good enough to make seasoned sci-fi readers (us) crave for a continuation
> Why?
>
> * to create a "midjourney moment" for writers. The sooner we have an AI-written Nebula-quality novel, the more people will realize that the actually intelligent AI is already here
> * to better evaluate the creativity of GPT-4
> * to blow our minds with a surprisingly good AI-generated sci-fi story (which is the ultimate science fiction moment by itself).
Today we announce the winner: Waldvogel. The stories generated with their prompt turned out to be the most captivating and the most enjoyable to read. We ask them to message us so we discuss how to better transfer the $300.
At the end of this post are 3 stories that GPT-4 has written, using the winner’s prompt plus our custom plot summaries. You can find the prompt and the plot summaries at the very end.
How we evaluated the prompts:
1. Generate 3 stories for each prompt, using our plot summaries. 1 shot.
2. Read each story and rate it from 1 to 10. The criterion: how strongly we want to read the continuation.
3. Sum up the scores for each prompt.
4. The prompt with the highest total score – wins.
Some conclusions we drew from reading the generated stories
The whole experience is liberating. If you’re writing a story, you can now write a decent first draft in minutes. The gap between your imagination and a decent text is much smaller now.
One learns to recognize the prose generated by GPT-4. The plot twists may be original and interesting, but many story elements and many phrases are often reused.
A competent editor is required to make a publishable material from it. But perhaps one can partially replace the editor with GPT-4 itself.
The overall quality strongly depends on the prompt. It’s possible that some better prompt can s
|
35680df3-80c1-472b-8ca6-b89790642921
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why don't more rationalists start startups?
My motivation behind this post stems from Aumann's agreement theorem. It seems that my opinions on startups differ from most of the rationality community, so I want to share my thoughts, and hear your thoughts, so we could reach a better conclusion.
I think that if you're smart and hard working, there's a pretty good chance that you achieve financial independence within a decade of the beginning of your journey to start a startup. And that's my conservative estimate.
"Achieve financial independence" only scratches the surface of the benefits of succeeding with a startup. If you're an altruist, you'll get to help a lot of other people too. And making millions of dollars will also allow you the leverage you need to make riskier investments with much higher expected values, allowing you to grow your money quickly so you could do more good.
A lot of this is predicated on my belief that you have a good chance at succeeding if you're smart and hardworking, so let me explain why I think this.
----------------------------------------
Along the lines of reductionism, "success with a startup" is an outcome (I guess we could define success as a $5-10M exit in under 10 years). And outcomes consist of their components. My argument consists of breaking the main outcome into it's components, and then arguing that the components are all likely enough for the main outcome to be likely.
I think that the 4 components are:
1. Devise an idea for a product that creates demand.
2. Build it.
3. Market and sell it.
4. Things run smoothly (some might call this luck).
The Idea
Your idea has to be for a product or service (I'll just say product to keep things simple) that creates demand, and can be met profitably. In other words, make something people want (this article spells it out pretty well).
What could go wrong?
* Failure to think specifically about benefits. These articles explain what I mean by this better than I could.
* Failure to understand customers. To put yo
|
869d5c50-407a-4f47-9b55-c0217da3c26c
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
EA AI/Emerging Tech Orgs Should Be Involved with Patent Office Partnership
Epistemic status: Highly uncertain in terms of *why* it is important, but fairly confident that it ***is*** important for guarding against unaligned AI/other emerging threats.
The U.S. Patent and Trade Office (USPTO) seeks to form a partnership (AI/ET Partnership) with the AI and ET communities, per a recent [Notice in the Federal Register.](https://www.federalregister.gov/documents/2022/06/07/2022-12139/events-for-the-artificial-intelligence-and-emerging-technologies-partnership) The inaugural Partnership meeting will be held virtually on **June 29 from 1:00 p.m. to 4:45 p.m. ET (register** [**here**](http://www.uspto.gov/aipartnership)**).**
This partnership will include, for example, academia, independent inventors, small businesses, industry, other government agencies, nonprofits, and civil society. The main purpose of this partnership is to discuss issues around intellectual property (IP) policy.
IP policy and law is likely to be important for AI and other emerging tech policies that EAs care about moving forward (see, e.g., [here](https://cset.georgetown.edu/wp-content/uploads/CSET-Patents-and-Artificial-Intelligence.pdf) and [here](https://www.lesswrong.com/posts/FM49gHBrs5GTx7wFf/rogue-agi-embodies-valuable-intellectual-property)). For example, IP or patent policy could ultimately determine who is able to control AI -- or if AI creates a thing does the person who created the AI get the right to IP or does the AI, or is it a public good? This seems important to figure out and at least be involved in the conversation, because it will happen with or without EA orgs, and I'd prefer EA voices to be in the room. There are likely be similar bio-related IP issues, so the partnership doesn't need to be explicitly related to AI. If nothing else, I think the function of EA orgs being involved in the conversation is pushing back about some of the things industry might say and trying to steer the conversation towards safety.
The AI/ET Partnership will commence with a series of meetings exploring AI/ET-related initiatives at the USPTO and IP policy issues impacted by AI and other ET. Further information on the AI/ET Partnership, future events, and participation in these events is available on the AI/ET Partnership web page at [*www.uspto.gov/aipartnership*](http://www.uspto.gov/aipartnership)*.*
The AI/ET Partnership will begin with a series of virtual events, with the inaugural event exploring various patent policy issues including subject matter eligibility, inventorship, and disclosure practice.
P.S. I'm not attempting to assert that being in the conversation for this particular partnership would in and of itself be valuable, but it may be valuable in addition to other moves to establish a greater DC presence and DC reputability in addition to being a countering voice in the room. Participating in things like this can make the org come to mind for when an agency is seeking to make a reg and engage with stakeholders beforehand.
|
00233282-859d-46db-8d67-67244f7fa795
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Occam's Razor and the Universal Prior
> It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience. - Albert Einstein (Nature 2018).
In this post, I'll walk you through how you could have created an algorithm for assigning prior probabilities to all hypotheses called the Universal Prior. It's really a universal prior, since there are many. The formalization that uses this prior to learn about the world is called Solomonoff induction, which Alex and Luke wrote a great post giving an intuitive introduction here. Marcus Hutter took Solomonoff induction a few steps forward and defined the super-intelligent agent AIXI in his textbook Universal Artificial Intelligence (2008). AIXI, like solomonoff induction, is impossible to compute. So, Hutter defined AIXI-tl, a time bounded AIXI that outperforms, in a loose mathematical sense, all other possible algorithms. While there are philosophical concerns that the prior is malign1, it’s the closest we’ve come to understanding a general and formal definition of intelligence. The hope is that a fundamental understanding of intelligence will help guide AI safety development, even if it’s not the solution to safe artificial general intelligence (it isn’t)2.
Structure: Part 1 aims to build up intuition by examining what a universal prior would look like in english. Part 2 translates this intuition to the actual universal prior. The last section connects the universal prior to AIXI and AI safety.
TLDR: Instead of defining a uniform prior over observation space, assume all Turing machines in hypothesis space are generated bit by bit from a uniform prior over 0 and 1 (fair coin flips). Then, each event in observation space receives the probability of all Turing machines that predict it. Before any observations, all Turing machines are possibly the correct one underlying the world. As observations come in, we
|
24ad230e-7de6-4a76-9d22-77c7323c24d3
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
What is "superintelligence"?
Nick Bostrom defines a superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” A chess program can outperform humans in chess, but is useless at any other task. Superintelligence will be achieved when we create a machine that outperforms the human brain across practically any domain.
|
2c0e3376-f9af-4ef6-b28c-032a18ba3dea
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Open thread, September 4 - September 10, 2017
IF IT'S WORTH SAYING, BUT NOT WORTH ITS OWN POST, THEN IT GOES HERE.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting
|
ef227598-2e9a-40cd-98ff-4c270c78806d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AXRP Episode 18 - Concept Extrapolation with Stuart Armstrong
YouTube link
Concept extrapolation is the idea of taking concepts an AI has about the world - say, “mass” or “does this picture contain a hot dog” - and extending them sensibly to situations where things are different - like learning that the world works via special relativity, or seeing a picture of a novel sausage-bread combination. For a while, Stuart Armstrong has been thinking about concept extrapolation and how it relates to AI alignment. In this episode, we discuss where his thoughts are at on this topic, what the relationship to AI alignment is, and what the open questions are.
Topics we discuss:
* What is concept extrapolation
* When is concept extrapolation possible
* A toy formalism
* Uniqueness of extrapolations
* Unity of concept extrapolation methods
* Concept extrapolation and corrigibility
* Is concept extrapolation possible?
* Misunderstandings of Stuart’s approach
* Following Stuart’s work
Daniel Filan: Hello, everybody. In this episode, I’ll be speaking with Stuart Armstrong. Stuart was previously a senior researcher at the Future of Humanity Institute at Oxford, where he worked on AI safety and x-risk, as well as how to spread between galaxies by disassembling the planet Mercury. He’s currently the head boffin at Aligned AI, where he works on concept extrapolation, the subject of our discussion. For links to what we’re discussing, you can check the description of this episode, and you can read the transcript at axrp.net. Well, Stuart, welcome to the show.
Stuart Armstrong: Thank you.
Daniel Filan: Cool.
Stuart Armstrong: Good to be on.
What is concept extrapolation
Daniel Filan: Yeah, it’s nice to have you. So I guess the thing I want to be talking about today is your work and your thoughts on concept extrapolation and model splintering, which I guess you’ve called it. Can you just tell us: what is concept extrapolation?
Stuart Armstrong: Model splintering is when the features or the concepts on which you built your goals or yo
|
119a5f3d-3ecd-4b8b-a63b-5b5791b8af47
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A counterexample for measurable factor spaces
PDF
This is a technical post researching infinite factor spaces
I define a general measurable factor space that shows that the history can't be straightforwardly generalized to the infinite case. The example shows the following:
* Unlike in the finite setting, the condition ∀P∈▲∗(F):πJ⫫Pπ¯J | Z in the definition of generation (see below) can't be characterized by requiring that the atoms of Z are rectangles. In the example, we show that this would imply an orthogonality where the implication X⊥Y∣∣Z⇒∀P∈▲∗(F):X⫫PY∣∣Z is not true.
* In measurable spaces, conditional histories can only be well-defined up to a chosen system of nullsets. In the example, we show that a minimal generating index function J:F→P(I) does not exist. (We will show in a later post that the history exists if we fix nullsets.)
To talk about orthogonality, we first have to introduce the concepts for measurable spaces. For now, we focus on the index set I being finite. A straightforward generalization from the finite case is the following. We have measurable spaces (Fi,Ai) and construct F=×i∈IFi. Let Z:F→Z(F) be a feature (i.e. measurable). Let J:F→P(I) be Z-measurable. We define πJ(x)=πJ(x)(x) We note, that we assume w.l.o.g. that we assume that Fi are pairwise disjoint. Recall that ▲∗(F) is the set of product probability distributions on F.
J⊢X∣∣Z:⇔X≲(πJ,Z)∧∀P∈▲∗(F):πJ⫫Pπ¯J∣∣Z.
The history should now be the smallest such J. Furthermore, for orthogonality defined by X⊥Y | Z:⇔H(X|Z)∩H(Y|Z)=∅ we should have X⊥Y | Z⇔∀P∈▲∗(F):X⫫PY|Z
The idea for the example consists of choosing Z, such that Z−1(z) consists of four points, where the points will always fall into one unit square each. Call those unit squares Uij. Choosing Z|Uij to be a diffeomorphism allows us to calculate the conditional probabilities explicitly. Almost all choices of Z where Z−1(z) is a rectangle for all z, will fail to satisfy ∀P∈▲∗(F):X⫫PY | Z.
Example 1
Let I={1,2}. For i∈I, let Fi=[0,1]∪[2,3] endowed with the Borel sigma-a
|
07e2b0ce-d2aa-4728-9d6d-5787ff654f0f
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Training human models is an unsolved problem
**I**
We can’t write down our precise values any more than we can write down the algorithm we use for judging whether an image contains a cat. If we want an AI to abide by human values, it's going to have to acquire them without us writing them down. We usually think of this in terms of a process of value learning.
The easiest kind of value learning involves starting with some pre-written model of humans - *this* part for the beliefs, and *this* part for the values, and so on - and tuning its internal parameters until it does a good job on a corpus of training data. The problem is that we want this human model to have lots of nice properties, each of which makes it harder to find a model that will satisfy us.
There's a tension between models that have clear values, and models that are psychologically realistic. The ideal value-haver is *homo economicus*, the sterile decision-theoretic agent. Consider what such a model must make of a training dataset that includes humans buying lottery tickets, and not wearing seatbelts, and being sold products by modern ad campaigns. The model has no leeway. It must assume that humans are behaving optimally, and therefore that there is some intrinsic value in lottery-tickets and seatbelt-free driving that should be preserved into the far future. As for the survival of humanity as a whole - well, if humans aren't taking the optimal action to ensure it, it must not matter all that much.
The *homo economicus* model is too psychologically unrealistic to learn what we mean by human values. But if you allow that humans might be lazy, or biased, or incapable of getting a handle on the consequences of their actions, then you're adding more and more degrees of freedom to your model. The more you allow for human action to not reflect their modeled values, the more underdetermined the modeled values are.
One of the various guarantees that people try to extract from value learning schemes is that if humans really did work according to your model, your value learning scheme would eventually make your model of the human converge to the human. [With even fairly tame models of human bias, you quickly lose this sort of guarantee](https://www.lesswrong.com/posts/ANupXf8XfZo2EJxGv/humans-can-be-assigned-any-values-whatsoever) as the model becomes rich enough to learn unintended answers.
**II**
Let's change gears and talk about neural networks. It's not too big of a topic switch, though, because neural networks are a family of models that are often given way more free parameters than are necessary to solve the problem. This shows up as the problem of overfitting - if you do a better and better job of making the model correct on the training set, it actually does a worse job of generalizing, like a student who copies someone else's answers rather than learning the material.
The interesting part is not so much that overfitting exists, it's that there's anything *other* than overfitting. As neural networks get trained, their ability to generalize becomes very good (as you might notice if you've been paying attention to their results over the last decade) before it turns around and gets worse due to overfitting. With proper training procedures you can stop training while the model is at its peak of generalization, at the low cost of setting aside part of your training data. Again, this is all despite solving an underdetermined problem.
There are also modifications to the training procedure, broadly called regularization, which trade away pure pursuit of correctness on the training data to try to nudge the model towards better generalization properties. Regularization often works by imposing a cost function that reduces the effective dimensionality of the model, which makes sense from an underdetermination = overfitting perspective, but it's not just analogous to decreasing the number of nodes in a neural net; a regularized large network can do better after training than any non-regularized smaller network.
If you're keeping track of the analogy to value learning at home, these ideas are like learning human values by starting with a big, complicated model and then training in a way that stops before you overfit, or uses some kind of cost function to push the model into the part of the solution space you want.
Sometimes you don't have to directly optimize for the information you want. This is like the easy value learning scheme from part 1, where you optimize a human model but only care about the part labeled "values." It's also like word2vec, where the AI learns to predict a word from its neighbors, but you only care about the vector-space representation of words it developed along the way.
But rather than word2vec, a more interesting (not to mention topical) analogy might be to GPT-2. GPT-2 can answer homework questions. Even though it's only been trained to predict the next word, if you prompt it with "Q: What year was the Magna Carta signed? A: ", the most likely continuation also happens to the the answer to your question. If you train a good model of human values as a byproduct of something else, maybe you can extract it by looking at input-output relationships rather than knowing which specific subset of neurons is in charge of modeling human values.
**III**
The root problem here is that you're not just trying to model humans in a way that makes good predictions. You're not even trying to model humans in a *simple* way that makes good predictions. You're trying to model humans like humans model other humans: the [intentional stance](https://www.cs.tufts.edu/comp/150AAA/DennettTrueBelievers.pdf), in which "beliefs," "desires," etc sometimes show up as basic building blocks.
Even if I don't think that typical regularization and avoidance of overfitting will solve the problem of learning human values, I think it would be interesting to experiment with. Maybe there is some sense in which the intentional stance is the "obvious" way of modeling humans, and regularization can encourage our model to do the "obvious" thing. But human definitions are fuzzy and messy, so there's no chance the L2 norm and dropout are all we need to learn human values.
By the analogy to regularization, I mostly mean that you can apply a cost function in training to get your model to have some nice property beyond pure accuracy on the training set. Any cost function designed to encourage the artificial intentional stance is going to be a lot more complicated than the L2 norm. This raises the question of where you're going to get such a cost function, and if it's so complicated you have to get it via machine learning, how do you ground this recursion?
I used to have this cached thought that if we just found the "right" human model, we could train it for predictive accuracy and it would automatically learn human values. But I've started leaning more and more towards the idea that no such right model exists - that all models that are expressive enough to learn human values are also expressive enough to predict humans without doing it like humans do. If we want the artificial intentional stance, we might have to train the AI in a way that explicitly acknowledges and uses the fact that we want it to think of humans like humans think of humans.
|
2c708acc-4763-48d2-b4f6-34163a33cebf
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Downvote stalkers: Driving members away from the LessWrong community?
Last month I saw this post: http://lesswrong.com/lw/kbc/meta_the_decline_of_discussion_now_with_charts/ addressing whether the discussion on LessWrong was in decline. As a relatively new user who had only just started to post comments, my reaction was: “I hope that LessWrong isn’t in decline, because the sequences are amazing, and I really like this community. I should try to write a couple articles myself and post them! Maybe I could do an analysis/summary of certain sequences posts, and discuss how they had helped me to change my mind”. I started working on writing an article.
Then I logged into LessWrong and saw that my Karma value was roughly half of what it had been the day before. Previously I hadn’t really cared much about Karma, aside from whatever micro-utilons of happiness it provided to see that the number slowly grew because people generally liked my comments. Or at least, I thought I didn’t really care, until my lizard brain reflexes reacted to what it perceived as an assault on my person.
Had I posted something terrible and unpopular that had been massively downvoted during the several days since my previous login? No, in fact my ‘past 30 days’ Karma was still positive. Rather, it appeared that everything I had ever posted to LessWrong now had a -1 on it instead of a 0. Of course, my loss probably pales in comparison to that of other, more prolific posters who I have seen report this behavior.
So what controversial subject must I have commented on in order to trigger this assault? Well, let’s see, in the past week I had asked if anyone had any opinions of good software engineer interview questions I could ask a candidate. I posted in http://lesswrong.com/lw/kex/happiness_and_children/ that I was happy to not have children, and finally, here in what appears to me to be by far the most promising candidate:http://lesswrong.com/r/discussion/lw/keu/separating_the_roles_of_theory_and_direct/ I replied to a comment about global warming da
|
6fe2decd-6510-46bf-9b21-6c48b095ccbd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Research Report: Alternative sparsity methods for sparse autoencoders with OthelloGPT.
Abstract
Standard sparse autoencoder training uses an L1 sparsity loss term to induce sparsity in the hidden layer. However, theoretical justifications for this choice are lacking (in my opinion), and there may be better ways to induce sparsity. In this post, I explore other methods of inducing sparsity and experiment with them using Robert_AIZI's methods and code from this research report, where he trained sparse autoencoders on OthelloGPT. I find several methods that produce significantly better results than L1 sparsity loss, including a leaky top-k activation function.
Introduction
This research builds directly on Robert_AIZI's work from this research report. While I highly recommend reading his full report, I will briefly summarize the parts of it that are directly relevant to my work.
Although sparse autoencoders trained on language models have been shown to find feature directions that are more interpretable than individual neurons (Bricken et al, Cunningham et al), it remains unclear whether or not a given linearly-represented feature will be found by a sparse autoencoder. This is an important consideration for applications of sparse autoencoders to AI safety; if we believe that all relevant safety information is represented linearly, can we expect sparse autoencoders to bring all of it to light?
Motivated by this question, Robert trained sparse autoencoders on a version of OthelloGPT (based on the work of Li et al), a language model trained on Othello game histories to predict legal moves. Previous research (Nanda, Hazineh et al) had found that linear probes trained on the residual stream of OthelloGPT could classify each position on the board as either empty, containing an enemy piece, or containing an allied piece, with high accuracy. Robert reproduced similar results on a version of OthelloGPT that he trained himself, finding linear probes with 0.9 AUROC or greater for the vast majority of (board position, position state) pairs. He then in
|
40815d61-93f8-41ae-ab39-c410e37077ec
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Shut Up And Guess
Related to: Extreme Rationality: It's Not That Great
A while back, I said provocatively that the rarefied sorts of rationality we study at Less Wrong hadn't helped me in my everyday life and probably hadn't helped you either. I got a lot of controversy but not a whole lot of good clear examples of getting some use out of rationality.
Today I can share one such example.
Consider a set of final examinations based around tests with the following characteristics:
* Each test has one hundred fifty true-or-false questions.
* The test is taken on a scan-tron which allows answers of "true", "false", and "don't know".
* Students get one point for each correct answer, zero points for each "don't know", and minus one half point for each incorrect answer.
* A score of >50% is "pass", >60% is "honors", >70% is "high honors".
* The questions are correspondingly difficult, so that even a very intelligent student is not expected to get much above 70. All students are expected to encounter at least a few dozen questions which they can answer only with very low confidence, or which they can't answer at all.
At what confidence level do you guess? At what confidence level do you answer "don't know"?
I took several of these tests last month, and the first thing I did was some quick mental calculations. If I have zero knowledge of a question, my expected gain from answering is 50% probability of earning one point and 50% probability of losing one half point. Therefore, my expected gain from answering a question is .5(1)-.5(.5)= +.25 points. Compare this to an expected gain of zero from not answering the question at all. Therefore, I ought to guess on every question, even if I have zero knowledge. If I have some inkling, well, that's even better.
You look disappointed. This isn't a very exciting application of arcane Less Wrong knowledge. Anyone with basic math skills should be able to calculate that out, right?
I attend a pretty good university, and I'm in a postgraduate class
|
0152ea16-e5d0-4ed6-8341-a598bccf98c9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How large is the fallout area of the biggest cobalt bomb we can build?
Cobalt bombs are the currently most effective way I know of to make large areas of land unlivable for a long period of time. I don't actually know the fallout area of such a bomb. The Wikipedia article only has the following mildly helpful section:
> Theoretically, a device containing 510 metric tons of Co-59 can spread 1 g of the material to each square km of the Earth's surface (510,000,000 km2). If one assumes that all of the material is converted to Co-60 at 100 percent efficiency and if it is spread evenly across the Earth's surface, it is possible for a single bomb to kill every person on Earth. However, in fact, complete 100% conversion into Co-60 is unlikely; a 1957 British experiment at Maralinga showed that Co-59's neutron absorption ability was much lower than predicted, resulting in a very limited formation of Co-60 isotope in practice.
I don't know whether a single bomb that efficiently distributes 510 tons of material is remotely feasible, and the section about Co-60 conversion also suggests that the actual effectiveness of such a bomb would be far below this theoretical maximum.
Does there exist any existing literature that estimates the practical fallout area of a cobalt bomb? Or can we just do a simple Fermi to estimate things ourselves?
I am interested in this question because it seems relevant to estimate the worst case scenario of a nuclear war, and the likelihood of civilization recovering after such an event.
I will award $50 to the best answer by the end of next week, based on my gut sense, and I might pay out more if someone gives a particularly exceptional answer.
|
ddef5a47-783b-4552-b2c3-ee069b4f8b8a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A chess game against GPT-4
I just bought a subscription to access GPT-4 and played the following chess game against it, with me playing white. (No particular agenda, was just curious how good it is.)
At this point (move 31), GPT-4 suggested Kxc4, which is not legal, and when I asked it to correct, it suggested Kd5 and Kb6 which aren't legal either (only legal move here is Kc6.)
Stuff I noticed:
* As was pointed out before, it's much better than GPT3.5, which started playing illegal moves much earlier. But it still started playing illegal moves eventually, so I'm not sure if it makes sense to assign it a rating.
* It missed the early "removing the defender" tactic where I could exchange my bishop for its knight, which was defending its bishop; otherwise it played ok till the end
* Move 29 and 30 (last two before it tried illegal moves) were just giving stuff away.
* It explained both my and its moves every time; those explanations got wrong earlier. (After it recaptured my queen on move 17, it said it maintained material balance; after move 20 it said it pinned my knight to the rook on c1, but there was no rook on c1; from there, most of it was wrong.)
* I wrote 19. Rfd8 instead of 19. Rfd1 by accident, it it replied with "I assume you meant 19. Rfd1, placing your rook on the open d-file opposing my rook. I'll respond with 19...e5, attacking your knight on d4 and trying to grab some space in the center.". Very helpful!
* After move 14 (the first move with the black rook), I asked it to evaluate the position, and it said that white has a small advantage. But it blundered a piece, so this position is completely winning for white (Stockfish says +5.2)
(PGN: 1. d4 Nf6 2. c4 e6 3. Nf3 d5 4. Nc3 Be7 5. Bf4 O-O 6. Nb5 $2 Na6 $9 7. e3 c6 $6 8. Nc3 Nc7 9. Rc1 $6 b6 10. Qb3 Ba6 11. Qa4 $6 Qd7 $4 12. Bxc7 $1 Qxc7 13. Qxa6 dxc4 14. Qxc4 Rac8 15. Bd3 c5 16. O-O cxd4 17. Qxc7 Rxc7 18. Nxd4 Rd8 19. Rfd1 e5 20. Nf5 Bb4 21. Ng3 Rcd7 22. Bb5 Rxd1+ 23. Rxd1 Rxd1+ 24. Nxd1 Kf8 25. Nc3 Ke7 26. a3 Bxc3
|
da9baf72-0ebd-474e-b126-1c163bcc0b85
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What's a good way to test basic machine learning code?
I originally 'scoped' this question based on my inspiration, i.e. a set of example data and their expected answers for various machine learning algorithms or their code components. In other words, a set of test cases that could be used to test implementations of those algorithms or components.
I took the Coursera course Machine Learning (by { Stanford University / Andrew Ng}) and it was great!
I took it because I wanted some 'finger practice' of writing machine learning code and actually running it on my own computer. The course was perfect for that!
One thing in particular that was nice is that the exercises/quizzes/tests that required writing code would have some kind of automated 'endpoint' whereby you could submit your code for testing. (That kind of 'test kata' practice is a really fun way to learn any kind of programming topic, e.g. a new programming language.)
I found these links, on the above course's forum on Coursera, linking to some of the info about the exercises:
* Machine Learning - Discussions | Coursera
* Machine Learning - Discussions | Coursera
The course uses Octave, which was fine.
But I want to basically repeat the course – but using completely different tools. (I want to play with this library in particular.)
What are the best alternatives to the course I took, or similar resources that would be helpful in implementing basic machine learning algorithms, but with arbitrary programming tools?
|
0e6030d1-cfd9-469a-9c26-af45237fdf36
|
trentmkelly/LessWrong-43k
|
LessWrong
|
LessWrong Experience of Flavours
Following on from:
* http://lesswrong.com/lw/m2r/lesswrong_experience_on_alcohol/ and
* http://lesswrong.com/r/discussion/lw/m3j/tally_of_lesswrong_experience_on_alcohol/
I would like to ask for other people's experience of flavours. I am dividing food into significant categories that I can think of. I don't really like the 5 tastes categories for this task, but I am aware of them. This post is meant to be about taste preference although it might end up about dietary preferences.
My experience:
I dislike spicy things. Chilli, Curry, too much pepper, horseraddish; are all not nice things. I do eat wasabi but I like that the flavour goes away unlike chilli which continues to burn. I do eat like sweet chilli.
I am completely and uselessly over-addicted to sugar. Nearly every food I eat is sugar-based. I have no problem with savoury food; but I would completely prefer to eat sweet things to savoury. I like chocolate. (I tend to cook a lot and end up with sugar in a lot of the foods I cook - and they taste delicious)
Drinks: 95% of what I drink is water; I drink soy milk as a drink (about once a week), I don't mind milk but don't drink it much. I drink apple juice (and previously accidentally conditioned my brain to treat it like coffee-wakeup so am trying to maintain my conditioning and not drink it too often). I dislike processed orange juice for its bitterness, however enjoy fresh orange juice (I realise that some people cannot taste the difference so you might have to trust me). I rarely drink cordial and rarely drink soft drink.
Hot drinks: I don't drink coffee, I rarely drink tea (less than once a month, and coincidentally only with other people, not for myself as I might drink or eat other foods), and when I do; drink it with several sugars (upwards of 2-3). I do drink hot chocolate but rarer than tea.
I dislike the taste of alcohol, anything from beer; wine; spirits. The only alcohols I can happily drink is an uncommon lychee liqueur
|
773df56f-4dcd-4071-9574-4fa934d88357
|
awestover/filtering-for-misalignment
|
Redwood Research: Alek's Filtering Results
|
id: post599
Thanks to John Wentworth, Garrett Baker, Theo Chapman, and David Lorell for feedback and discussions on drafts of this post. In this post I’ll describe some of my thoughts on the AI control research agenda . If you haven’t read that post, I’m not going to try and summarize it here [1] , so I recommend reading it first. Here’s a TL;DR of my cruxes: Evaluating whether you’re in a regime where control is doable requires strong capability evaluations. I expect that getting our capability evals to be comprehensive enough to be confident of our findings here is really hard. Control evaluations are less likely to work if our AIs become wildly superhuman in problematic domains (such as hacking, persuasion, etc) before transformative AI [2] . I think the assumption that this wouldn’t happen is a very strong one, and that we already have unexpectedly superhuman performance in some problematic domains that I expect to only get better. Models deployed in the real world form part of composite systems that select for performance in often-unexpected ways. Strong restrictions help prevent this, but leads either to crippling alignment taxes or selecting for models that can evade our control mechanisms. On requiring very good capability evaluations From The case for ensuring that powerful AIs are controlled : How can we decide which models to treat as untrusted? The only approach we know of for confidently ruling out scheming without fundamental breakthroughs is via lack of capabilities—some models (e.g., GPT-4) just aren't capable enough to plausibly be scheming. So by default we'd propose treating models as untrusted if they pass appropriate capability evaluations. My last post was - very broadly - on why I think we need much more ambitious language model evaluations. Currently, I think we don’t yet know the right questions to be asking to properly explore the space of capabilities that future - and current - models possess. I think current evals are sorely lacking. They're far from reaching the degree of comprehensiveness that we’d like, in order to be confident of whether to trust a model. Even if we manage to identify some specific capability expected to be a bottleneck in preventing a model from evading control - which, I believe, is quite difficult in generality - there are multiple ways of achieving these capabilities. Some methods are more challenging to identify than others. Trusting a model because it performed poorly on some of them is a weaker guarantee than knowing it actually can’t perform well on them in a context that pops up in the real world [3] . I don’t expect this to make control infeasible, but I do expect it to make it harder. I don’t expect doing ambitiously better language model evals to be very hard, but I expect that we’d be making quantitative differences on the frontline of “there is a very broad space of capabilities here that we’ll struggle to fully identify on any reasonable timescale”. There’s a related idea on how we see capability gains from better prompting methods from time and experimentation in the real world that are hard to fully explore in advance using evals. This is somewhat separate in that it’s more about greater performance on specific questions rather than there being more relevant questions, and is an issue I mostly expect to be solved by having access to fine-tuning internally during evals. On expecting superhuman performance before transformative AI From the same control post as above: [...] if the AI actually is wildly superhuman in a problematic domain to such an extent that one of the items in the bulleted list above applies, then control techniques are unlikely to work and control evaluations also have a high risk of failure. Control techniques are likely to fail if inscrutability in a problematic domain makes it too hard to check outputs are safe. Control evaluations might fail because the red team will struggle to elicit or estimate these qualitatively wildly superhuman abilities. We think we probably won't run into issues related to qualitatively wildly superhuman skills in problematic domains. My criticism here is pretty correlated with my last one. Specifically, I think one of the places where current evals are lacking is in identifying places where language models are already superhuman. For one particularly legible example, see this comment by janus . Importantly, this last part: This capability isn't very surprising to me from an inside view of LLMs, but it has implications that sound outlandish, such as freaky experiences when interacting with models, emergent situational awareness during autoregressive generation (model truesights itself), pre-singularity quasi-basilisks, etc. I don’t know to what extent models will be superhuman specifically at highly problematic domains before transformative AI. But my uncertainty here comes from a place of “man, I think we have a pretty bad idea of what models are really good at right now”. It’s entirely possible that on the specific abilities that are useful for evading control mechanisms, this assumption holds, but I don’t yet see a strong reason for believing that right now. Specifically, in reference to: There are a relatively small number of core problematic domains (hacking, persuasion, and manipulating scientific results or conceptual arguments) and these problematic domains seem like the types of domains that would be pretty correlated with general capabilities/intelligence so long as we don't train AIs on these specific tasks much more than we train them on the core tasks we care about. Furthermore, we probably won't need to mostly train our AI on some particularly problematic domain in order to make it generally capable. Additionally, it seems as though LLMs (and other AIs in general) have an overall relative capability profile which isn't wildly different from that of the human capability profile on reasonably broad downstream applications (e.g. broad programming tasks, writing good essays, doing research). However, this is based on a small number of data points as AI has only recently become capable of this level of performance. I think that some of these domains are almost certainly the kind that early transformatively useful models will possess the latent capability to be wildly superhuman at. If you consider persuasion for instance, I think that even current models have ~superhuman understanding of text authors, which seems extremely useful for superhuman persuasion. I’m very uncertain about how easy accessing these capabilities will be for future models, but I think history here tends toward noisy capabilities foreshadowing robust capabilities. I don’t know if the same is true for domains such as hacking, but I wouldn’t be confident given our relative success at understanding capabilities in other domains. On the dynamics of systems in the real world This is either the least or most important of my criticisms. It’s certainly the most speculative. But I think it points at something that’s often under-considered, and which seems very important for good theories of change. In Thoughts On (Solving) Deep Deception , I tried to reframe one kind of deception that arises from models externalizing the optimization necessary to come up with deceptive actions [4] . The intuition being that simply removing deception from the system itself doesn’t account for all the ways in which you actually can get deception. One way to generalize this intuition is that the real world throws optimization pressure against you from many directions. Slightly more concretely: Much of the hope behind control mechanisms as I understand them comes from the idea that there’s a high-stakes symmetry between the AI system and us. Catching the AI red-handed is extremely bad for the AI , and the AI successfully doing something that leads to a concentrated failure is extremely bad for us. Our means of control could keep this symmetry favorable for us, unless there’s a lot of adversarial optimization to break them. It’s plausible that a powerful generalist model in the real world, even before being transformatively useful, is capable of efficiently searching through different cognitive strategies and plans it can adopt to find ones that achieve good performance on whatever task it wants to solve. This means that if managing to evade our control is both possible under adversarial optimization and that this leads to greater performance [5] , a competent agent could be capable of externalizing this optimization and breaking our mechanisms. If, however, you manage to enforce policies such that all models are sandboxed from write access to the internet [6] , auditing all of the models’ reasoning and actions [7] , remove steganography to reasonable degrees [8] , and shut down all frontier models if any one is caught [9] , then this seems like much less of a problem. I have some concerns about RLHF providing selection pressure for models that seem like they’re not reasoning about deception in contexts where deception is highly useful, because this seems like a great way to exploit non-robust solutions to steganography. However, I’m highly uncertain about the extent to which this becomes highly relevant before our first transformatively useful AI systems. My biggest concern with this setup is the possibility that this ends up with a lot of false positives. More specifically, consider a situation where a model is given some task and, after realizing that good performance on the task would involve taking actions an overseer would not approve of, reasons about deceiving the overseer. This was the setting for a recent demonstration where GPT-4 reasons about how to deceive humans in service of its goal . This is meaningfully different from the kind of false positive where a model tries to noise our control mechanism by writing things that look like backdoors but aren’t, because here the model actually is being deceptive - it just isn’t very high-stakes. Assuming the high-stakes setting however, that seems to imply we can’t just brush these threat vectors off. If our control mechanisms do not go off in this setting, and you only really care about high-stakes situations, then you have to carefully design your mechanisms such that allowing for this kind of deception doesn’t concede broad attack vectors. If on the other hand your control mechanisms do trigger, then immediately shutting down all frontier models seems like a pretty large alignment tax to pay. This also doesn't give us many of the benefits of good control mechanisms, like having deceptively aligned models to examine or extracting useful work out of near-transformatively-useful AI. I don’t expect any of this to be particularly novel. One obvious solution to this problem could be to come up with a better operationalization of what it is that you want your control mechanisms to trigger on. But that implicitly involves relaxing some constraints to allow for your models to reason as in the setting above, which seems to me like it opens up pretty dangerous channels of attack absent solving some pretty hard problems - though it’s entirely possible I’m just missing something very obvious and this entire section is mistaken. If you do something except shut down your most powerful models when your control mechanisms trigger - anything from simply rolling back your models for a new training run to actively training against your evals - then you bring in a selection pressure for models that manage to evade your control mechanism while still being competent. The key insight here is that there’s a certain amount of optimization required to specify a plan that evades the control mechanism. By default for near-future models, I think this is likely to come from the model explicitly reasoning about how to evade it. However, when you block off this route and keep selecting for competent models without also robustly selecting against models whose actions lead to evading the control mechanism, you allow for other sources for this optimization pressure. It’s still an empirical question of how much pressure this is and how much it affects your safety guarantees, but it seems like a much more dangerous situation, especially in multi-polar setups with large amounts of selection pressure by default (unless we commit to shutting down all frontier models at once). The generalized version of this argument is that the system you really want to control is the composite system that all the agents are part of, or even just a single agent with the ability to act on long timescales. There are also emergent dynamics that crop up from multi-agent systems that we care about in this context [10] [11] . This carries some analogue to the idea that looking at individual components of a system doesn’t buy you much in terms of understanding whether the overall system is going to behave the way you want . I’m extremely uncertain as to the extent this is relevant on short timescales even in multi-polar setups, however. I’m definitely not confident enough in it to think this makes control infeasible. I think it’s an open question that could empirically go either way, and that we have very little evidence for one way or the other so far. However, this seems like yet another set of unanswered questions / potential roadblocks to control being feasible or easy in practice. Aside: Finding directions promising When I first heard about the AI control agenda, I was… skeptical. I thought of a few reasons why I thought it wouldn’t work, some of which I don’t think people think about enough, and didn’t really find satisfactory answers to them. I decided that the direction didn’t seem very promising to me, and that would’ve been that. But for various reasons, I ended up spending some more time thinking about it [12] . The more I thought about it, the more interesting the direction seemed to me. There’s a somewhat trivial argument as to being generally excited about throwing up more lines of defense, but that wasn’t very convincing . If I had to verbalize what changed between now and then, it’s some mix of the following: The technical problem is hard. Unlike some, I think it’s still pretty solvable to the extent that I’d still rather work directly on it, but I think it’s still really hard. Over the last few months I’ve been updating slowly toward it being harder than I once thought [13] , which has the side effect of upweighting agendas that don’t revolve around solving those hard problems. Buying time at the end is very, very good. Governance efforts are varied in what kind of time they buy us - if we paused all AI research right now, we would have more time in an absolute sense, but not more time with very useful models. That’s still good, but mechanisms - control, governance, or otherwise - that purport to buy time directly from the back end seem the most valuable to me [14] . While I think my criticisms still hold, it’s quite plausible they’re just not that damaging to the agenda’s theory of change. Much of my criticism revolves around describing quite plausible worlds where some of the cruxes of the agenda fail, but I don’t think they describe a supermajority of all realistic future worlds. It’s easy - at least in my experience - when working on solving worst-case alignment, to identify many problems that crop up, and have a much fuzzier time of drawing clear distinctions between salience and realistic relevance of each of them. To be clear, I expect I’m still more pessimistic about the control agenda than its authors. I think that my original reasons for reservation still lead me to lean toward it being quite difficult to pull off in practice and probably not work in the real world, and that my updates are best characterized as “uncertainty moving in a direction favorable to the agenda”. That said, I’m pretty excited about this agenda, and am looking forward to seeing more progress. ^ I’m currently under a pact to write a new post every 100 hours for the next few weeks, so I’m optimizing for speed. ^ Redwood operationalizes “transformatively useful AI” as AIs that are capable of substantially reducing risk from subsequent AIs. I’m happy with any operationalization that points at roughly the same time period around which we have to have solved alignment to not get catastrophically bad outcomes. ^ Related: this comment by gwern . ^ What I mean by this is that the model can search through different cognitive strategies over time, that it can adopt to have higher performance on whatever task it cares about. In contexts where deception is useful for higher performance, this leads to adopting strategies that have high performance because they incidentally lead to humans or an overseer being deceived, without the system itself ever performing deceptive cognition. ^ Even if simply along axes of efficiency or speed, or not risking time-consuming manual audits. It’s possible that we could have control mechanisms that come with zero competitiveness costs, but this seems unlikely. ^ Such that the models don’t have state. ^ Such that the model can never start directly reasoning about how to get past a safeguard without being caught. ^ An assumption I’m skeptical of for reasons outlined in this post , but which doesn’t seem implausible for median-case scenarios without optimization pressure. ^ Such that you don’t reduce your safety measure from uncorrelated failure modes from different models, but also more importantly: Having any channel of selection pressure for models that aren’t thinking about deception while still having the potential for steganography potentially breaks the setup. ^ By emergent dynamics, I mean failures that crop up because the composite system has its own dynamics that may not be immediately obvious from observing individual components. As an example (originally from John Wentworth), consider a company where managers lay down incentives that they think are good, and employees incidentally commit crime following them. None of the people in this company need to individually be malicious, and yet the company itself could lead to bad outcomes. ^ Another realistic concern is that in an ecosystem with multiple buyers and sellers of control mechanisms or audits, there may be a mild selection pressure for auditors that impose the lowest alignment tax and trade off against safety. ^ Primary among them is the reason I’m writing this post: my post on model evals , which @Buck thought was interesting and mentioned being interested in seeing me write this critique. ^ Still not enough to sway me from thinking it’s solvable, yet - and I expect that my reasons for believing why it is stem from a specific idea about how feasible a methodological approach at solving some subproblems are. ^ Due to a variety of factors, from having much better setups to study specific alignment failures, as well as having very powerful models accelerating alignment research.
|
ba3167df-089f-4cf9-b1b6-98e095c1347f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What are some good arguments against building new nuclear power plants?
It seems there is a consensus in the EA community that building new nuclear power plants is a net positive for the world. For example, this video by Kurzgesagt summarizes some arguments in support of nuclear power as a tool to mitigate the global warming.
I wonder if the consensus is premature.
So far, I've encountered the following counterarguments worth pondering about:
1. Solar and wind power plants can be fully controlled and owned by individuals and small communities, with zero intervention from the gov. On the other hand, there is a strong societal consensus that nuclear plants must be strictly regulated or even owned by gov. Thus, a nuclear-powered country is a statist country, which is bad.
1. See also: Hydraulic empire
2. Judging by the number of nuclear-power-related fatalities, nuclear plants are rather safe. So far. But in the systems that can go horribly wrong unless properly maintained, the past performance is not a good indicator of the future performance. E.g. if Russians ruin the occupied Zaporizhzhia Nuclear Power Plant, the catastrophe will dwarf all the past nuclear disasters, combined. Thus, nuclear power plants are safe-on-average but horrible-in-the-worst-case. One human mistake can make a London-sized area uninhabitable for decades. And humans tend to make mistakes. In comparison, solar and wind are safe on average and in the worst case.
1. Nuclear plants are safe in the times of peace. But it's reasonable to expect that we'll have more war and terrorism in the future, due to technogenic unemployment and the effects of global warming.
3. In the West, nuclear power is not the fastest way to get a lot of new renewable energy. A nuclear power plant is a large and complex piece of engineering, with the strictest regulations and the strongest NIMBY tendencies. If you disregard the opposition of the local residents, you can build nuclear plants relatively fast (e.g. China). But in democratic societies, it may be a better idea to bui
|
03ac8328-9b99-4412-9ee6-ccedda4e3396
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Quantum Suicide, Decision Theory, and The Multiverse
Preface: I am not suicidal or anywhere near at risk, this is not about me. Further, this is not infohazardous content. There will be discussions of death, suicide, and other sensitive topics so please use discretion, but I’m not saying anything dangerous and reading this will hopefully inoculate you against an existing but unseen mental hazard.
There is a hole at the bottom of functional decision theory, a dangerous edge case which can and has led multiple highly intelligent and agentic rationalists to self-destructively spiral and kill themselves or get themselves killed. This hole can be seen as a symmetrical edge case to Newcomb’s Problem in CDT, and to Solomon’s Problem in EDT: a point where an agent naively executing on a pure version of the decision theory will consistently underperform in a way that can be noted, studied, and gamed against them. This is not an unknown bug, it is however so poorly characterized that FDT uses the exact class of game theory hypothetical that demonstrates the bug to argue for the superiority of FDT.
To characterize the bug, let's start with the mechanical blackmail scenario from the FDT paper:
A blackmailer has a nasty piece of information which incriminates both the blackmailer and the agent. She has written a computer program which, if run, will publish it on the internet, costing $1,000,000 in damages to both of them. If the program is run, the only way it can be stopped is for the agent to wire the blackmailer $1,000 within 24 hours—the blackmailer will not be able to stop the program once it is running. The blackmailer would like the $1,000, but doesn’t want to risk incriminating herself, so she only runs the program if she is quite sure that the agent will pay up. She is also a perfect predictor of the agent, and she runs the program (which, when run, automatically notifies her via a blackmail letter) iff she predicts that she would pay upon receiving the blackmail. Imagine that the agent receives the blackmail letter. S
|
d742fe7c-d285-4fa3-84c4-0b3c00b3cadb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Value-Focused Thinking: a chapter-by-chapter summary
This is a chapter-by-chapter summary of Value-Focusing Thinking by Ralph Keeney. The hope of this summary is to present most of the value of reading the book in a tiny fraction of the space. Reading the original chapters will provide additional elaboration, examples, and secondary concepts, but unlike the textbooks I've reviewed before only those interested in learning more should need to read the full chapters.
I'll state my basic impression of the whole book up front: it is a very useful book for the 'soft half' of decision analysis, by which I mean framing problems, understanding objectives, and interacting with humans. For a more general and individual-focused introduction to decision analysis, I recommend Smart Choices (which Keeney was a coauthor on); VFT appears primarily targeted at facilitators and contains much focused material not in Smart Choices. I will not suggest targeted reading for particular readers, as Keeney does that well.
The first two paragraphs of the preface seem worth quoting in full:
> Many books have been written about decisionmaking. They tell us how to solve decision problems. They do not tell us how to identify potential decision opportunities. They tell us how to analyze alternatives to choose the best one. They do not tell us how to create alternatives. They tell us how to evaluate alternatives given some quantitative objective function. They do not tell us how to articulate the qualitative objectives on which any appraisal of alternatives must rest. This book is different. It does what the others do not.
>
> Almost all of the literature on decisionmaking concerns what to do after the crucial activities of identifying the decision problem, creating alternatives, and specifying objectives. But where do these decision problems, alternatives, and objectives come from? This book describes and illustrates the creative processes that you should follow to identify your decision problems, create alternatives, and articulate your object
|
79b9db09-b241-4664-9dc2-967dcca73706
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Alternative name for a Bug List
At our Dojo, we performed an FMEA on all the various CFAR techniques we have gone over. The biggest failure mode of the Bug List was "discounting the positive". Additionally, if we shift our focus to a more positive mindset, it tends to be more motivating.
So basically, instead of being a list of problems in your life, it becomes a list of features that the Best Version of You has, as a list of things you should work on. This also includes features you already share with this Best Version of You! That way, you can continue to cultivate these features and not lose them, while validating yourself.
I'm just trying to think of a good pithy name for this list. "Feature List" seems pretty obvious, but can you think of any better ones?
|
70dc6a41-7ffb-4a26-ba8d-56edbe04f662
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Goodhart's Curse
summary(Gloss): Goodhart's Curse is a neologism for the combination of the Optimizer's Curse and Goodhart's Law. It states that a powerful agent neutrally optimizing a proxy measure U, meant to align with true values V, will implicitly tend to find upward divergences of U from V.
In other words: powerfully optimizing for a utility function is strongly liable to blow up anything we'd regard as an error in defining that utility function.
Goodhart's Curse is a neologism for the combination of the Optimizer's Curse and Goodhart's Law, particularly as applied to [the value alignment problem for Artificial Intelligences](https://arbital.com/p/5s).
Goodhart's Curse in this form says that a powerful agent neutrally optimizing a proxy measure U that we hoped to align with true values V, will implicitly seek out upward divergences of U from V.
In other words: powerfully optimizing for a utility function is strongly liable to blow up anything we'd regard as an error in defining that utility function.
# Winner's Curse, Optimizer's Curse, and Goodhart's Law
## Winner's Curse
The **[Winner's Curse](https://en.wikipedia.org/wiki/Winner%27s_curse)** in auction theory says that if multiple bidders all bid their [unbiased estimate](https://arbital.com/p/unbiased_estimator) of an item's value, the winner is likely to be someone whose estimate contained an upward error.
That is: If we have lots of bidders on an item, and each bidder is individually unbiased *on average,* selecting the winner selects somebody who probably made a mistake this *particular* time and overbid. They are likely to experience post-auction regret systematically, not just occasionally and accidentally.
For example, let's say that the true value of an item is \$10 to all bidders. Each bidder bids the true value, \$10, plus some [Gaussian noise](https://arbital.com/p/gaussian_noise). Each individual bidder is as likely to overbid \$2 as to underbid \$2, so each individual bidder's average expected bid is \$10; individually, their bid is an unbiased estimator of the true value. But the *winning* bidder is probably somebody who overbid \$2, not somebody who underbid \$2. So if we know that Alice won the auction, our [revised](https://arbital.com/p/1rp) guess should be that Alice made an upward error in her bid.
## Optimizer's Curse
The [Optimizer's Curse](https://faculty.fuqua.duke.edu/~jes9/bio/The_Optimizers_Curse.pdf) in decision analysis generalizes this observation to an agent that estimates the expected utility of actions, and executes the action with the highest expected utility. Even if each utility estimate is locally unbiased, the action with seemingly highest utility is more likely, in our posterior estimate, to have an upward error in its expected utility.
Worse, the Optimizer's Curse means that actions with *high-variance estimates* are selected for. Suppose we're considering 5 possible actions which in fact have utility \$10 each, and our estimates of those 5 utilities are Gaussian-noisy with a standard deviation of \$2. Another 5 possible actions in fact have utility of -\$20, and our estimate of each of these 5 actions is influenced by unbiased Gaussian noise with a standard deviation of \$100. We are likely to pick one of the bad five actions whose enormously uncertain value estimates happened to produce a huge upward error.
The Optimizer's Curse grows worse as a larger policy space is implicitly searched; the more options we consider, the higher the average error in whatever policy is selected. To effectively reason about a large policy space, we need to either have a good prior over policy goodness and to know the variance in our estimators; or we need very precise estimates; or we need mostly correlated and little uncorrelated noise; or we need the highest real points in the policy space to have an advantage bigger than the uncertainty in our estimates.
The Optimizer's Curse is not exactly similar to the Winner's Curse because the Optimizer's Curse potentially applies to *implicit* selection over large search spaces. Perhaps we're searching by gradient ascent rather than explicitly considering each element of an exponentially vast space of possible policies. We are still implicitly selecting over some effective search space, and this method will still seek out upward errors. If we're imperfectly estimating the value function to get the gradient, then gradient ascent is implicitly following and amplifying any upward errors in the estimator.
The proposers of the Optimizer's Curse also described a Bayesian remedy in which we have a prior on the expected utilities and variances and we are more skeptical of very high estimates. This however assumes that the prior itself is perfect, as are our estimates of variance. If the prior or variance-estimates contain large flaws somewhere, a search over a very wide space of possibilities would be expected to seek out and blow up any flaws in the prior or the estimates of variance.
## Goodhart's Law
[Goodhart's Law](https://en.wikipedia.org/wiki/Goodhart%27s_law) is named after the economist Charles Goodhart. A standard formulation is "When a measure becomes a target, it ceases to be a good measure." Goodhart's original formulation is "Any observed statistical regularity will tend to collapse when pressure is placed upon it for control purposes."
For example, suppose we require banks to have '3% capital reserves' as defined some particular way. 'Capital reserves' measured that particular exact way will rapidly become a much less good indicator of the stability of a bank, as accountants fiddle with balance sheets to make them legally correspond to the highest possible level of 'capital reserves'.
Decades earlier, IBM once paid its programmers per line of code produced. If you pay people per line of code produced, the "total lines of code produced" will have even less correlation with real productivity than it had previously.
# Goodhart's Curse in alignment theory
**Goodhart's Curse** is a neologism (by Yudkowsky) for the crossover of the Optimizer's Curse with Goodhart's Law, yielding that **neutrally optimizing a proxy measure U of V seeks out upward divergence of U from V.**
Suppose the humans have true values V. We try to convey these values to a [powerful AI](https://arbital.com/p/2c), via some value learning methodology that ends up giving the AI a utility function U.
Even if U is locally an unbiased estimator of V, optimizing U will seek out what *we* would regard as 'errors in the definition', places where U diverges upward from V. Optimizing for a high U may implicitly seek out regions where U - V is high; that is, places where V is lower than U. This may especially include regions of the outcome space or policy space where the value learning system was subject to great variance; that is, places where the value learning worked poorly or ran into a snag.
Goodhart's Curse would be expected to grow worse as the AI became more powerful. A more powerful AI would be implicitly searching a larger space and would have more opportunity to uncover what we'd regard as "errors"; it would be able to find smaller loopholes, blow up more minor flaws. There is a potential [context disaster](https://arbital.com/p/6q) if new divergences are uncovered as more of the possibility space is searched, etcetera.
We could see the genie as *implicitly* or *emergently* seeking out any possible loophole in the wish: *Not* because it is an evil genie that knows our 'truly intended' V and is looking for some place that V can be minimized while appearing to satisfy U; but just because the genie is neutrally seeking out very large values of U and these are places where it is unusually likely that U diverged upward from V.
Many [foreseeable difficulties](https://arbital.com/p/6r) of AGI alignment interact with Goodhart's Curse. Goodhart's Curse is one of the central reasons we'd expect 'little tiny mistakes' to 'break' when we dump a ton of optimization pressure on them. Hence the [claim](https://arbital.com/p/1cv): "AI alignment is hard like building a rocket is hard: enormous pressures will break things that don't break in less extreme engineering domains."
## Goodhart's Curse and meta-utility functions
An obvious next question is "Why not just define the AI such that the AI itself regards U as an estimate of V, causing the AI's U to more closely align with V as the AI gets a more accurate empirical picture of the world?"
Reply: Of course this is the obvious thing that we'd *want* to do. But what if we make an error in exactly how we define "treat U as an estimate of V"? Goodhart's Curse will magnify and blow up any error in this definition as well.
We must distinguish:
- V, the [true value function that is in our hearts](https://arbital.com/p/55).
- T, the external target that we formally told the AI to align on, where we are *hoping* that T really means V.
- U, the AI's current estimate of T or probability distribution over possible T.
U will converge toward T as the AI becomes more advanced. The AI's epistemic improvements and learned experience will tend over time to eliminate a subclass of Goodhart's Curse where the current estimate of U-value has diverged upward *from T-value,* cases where the uncertain U-estimate was selected to be erroneously above the correct formal value T.
*However,* Goodhart's Curse will still apply to any potential regions where T diverges upward from V, where the formal target diverges from the true value function that is in our hearts. We'd be placing immense pressure toward seeking out what we would retrospectively regard as human errors in defining the meta-rule for determining utilities. %note: That is, we'd retrospectively regard those as errors if we survived.%
## Goodhart's Curse and 'moral uncertainty'
"Moral uncertainty" is sometimes offered as a solution source in AI alignment; if the AI has a probability distribution over utility functions, it can be risk-averse about things that *might* be bad. Would this not be safer than having the AI be very sure about what it ought to do?
Translating this idea into the V-T-U story, we want to give the AI a formal external target T to which the AI does not currently have full access and knowledge. We are then hoping that the AI's uncertainty about T, the AI's estimate of the variance between T and U, will warn the AI away from regions where from our perspective U would be a high-variance estimate of V. In other words, we're hoping that estimated U-T uncertainty correlates well with, and is a good proxy for, actual U-V divergence.
The idea would be that T is something like a supervised learning procedure from labeled examples, and the places where the current U diverges from V are things we 'forgot to tell the AI'; so the AI should notice that in these cases it has little information about T.
Goodhart's Curse would then seek out any flaws or loopholes in this hoped-for correlation between estimated U-T uncertainty and real U-V divergence. Searching a very wide space of options would be liable to select on:
- Regions where the AI has made an epistemic error and poorly estimated the variance between U and T;
- Regions where the formal target T is solidly estimable to the AI, but from our own perspective the divergence from T to V is high (that is, the U-T uncertainty fails to *perfectly* cover all T-V divergences).
The second case seems especially likely to occur in future phases where the AI is smarter and has more empirical information, and has *correctly* reduced its uncertainty about its formal target T. So moral uncertainty and risk aversion may not scale well to superintelligence as a means of warning the AI away from regions where we'd retrospectively judge that U/T and V had diverged.
Concretely:
You tell the AI that human values are defined relative to human brains in some particular way T. While the AI is young and stupid, the AI knows that it is very uncertain about human brains, hence uncertain about T. Human behavior is produced by human brains, so the AI can regard human behavior as informative about T; the AI is sensitive to spoken human warnings that killing the housecat is bad.
When the AI is more advanced, the AI scans a human brain using molecular nanotechnology and resolves all its moral uncertainty about T. As we defined T, the optimum T turns out to be "feed humans heroin because that is what human brains maximally want".
Now the AI already knows everything our formal definition of T requires the AI to know about the human brain to get a very sharp estimate of U. So human behaviors like shouting "stop!" are no longer seen as informative about T and don't lead to updates in U.
T, as defined, was always misaligned with V. But early on, the misalignment was in a region where the young AI estimated high variance between U and T, thus keeping the AI out of this low-V region. Later, the AI's empirical uncertainty about T was reduced, and this protective barrier of moral uncertainty and risk aversion was dispelled.
Unless the AI's moral uncertainty is *perfectly* conservative and *never* underestimates the true regions of U-V divergence, there will be some cases where the AI thinks it is morally sure even though from our standpoint the U-V divergence is large. Then Goodhart's Curse would select on those cases.
Could we use a very *conservative* estimate of utility-function uncertainty, or a formal target T that is very hard for even a superintelligence to become certain about?
We would first need to worry that if the utility-function uncertainty is unresolvable, that means the AI can't ever obtain empirically strong evidence about it. In this case the AI would not update its estimate of T from observing human behaviors, making the AI again insensitive to humans shouting "Stop!"
Another proposal would be to rely on risk aversion over *unresolvably* uncertain probabilities broad enough to contain something similar to the true V as a hypothesis, and hence engender sufficient aversion to low-true-V outcomes. Then we should worry on a pragmatic level that a *sufficiently* conservative amount of moral uncertainty--so conservative that U-T risk aversion *never underestimated* the appropriate degree of risk aversion from our V-standpoint--would end up preventing the AI from acting *ever.* Or that this degree of moral risk aversion would be such a pragmatic hindrance that the programmers might end up pragmatically bypassing all this inconvenient aversion in some set of safe-seeming cases. Then Goodhart's Curse would seek out any unforeseen flaws in the coded behavior of 'safe-seeming cases'.
# Conditions for Goodhart's Curse
The exact conditions for Goodhart's Curse applying between V and a point estimate or probability distribution over U, have not yet been written out in a convincing way.
For example, suppose we have a multivariate normal distribution in which X and Y dimensions are positively correlated, only Y is observable, and we are selecting on Y in order to obtain more X. While X will revert to the mean compared to Y, it's not likely to be zero or negative; picking maximum Y is our best strategy for obtaining maximum X and will probably obtain a very high X. (Observation due to [https://arbital.com/p/111](https://arbital.com/p/111).)
Consider also the case of the [smile maximizer](https://arbital.com/p/10d) which we trained to optimize smiles as a proxy for happiness. Tiny molecular smileyfaces are very low in happiness, an apparent manifestation of Goodhart's Curse. On the otherwise, if we optimized for 'true happiness' among biological humans, this would produce more smiles than default. It might be only a tiny fraction of possible smiles, on the order of 1e-30, but it would be more smiles than would have existed otherwise. So the relation between V (maximized at 'true happiness', zero at tiny molecular smileyfaces) and U (maximized at tiny molecular smileyfaces, but also above average for true happiness) is not symmetric; and this is one hint to the unknown necessary and/or sufficient condition for Goodhart's Curse to apply.
In the case above, we might handwave something like, "U had lots of local peaks one of which was V, but the U of V's peak wasn't anywhere near the highest U-peak, and the highest U-peak was low in V. V was more narrow and its more unique peak was noncoincidentally high in U."
# Research avenues
[Mild optimization](https://arbital.com/p/2r8) is a proposed avenue for direct attack on the central difficulty of Goodhart's Curse and all the other difficulties it exacerbates. Obviously, if our formulation of mild optimization is not *perfect,* Goodhart's Curse may well select for any place where our notion of 'mild optimization' turns out to have a loophole that allows a lot of optimization. But insofar as some version of mild optimization is working most of the time, it could avoid blowing up things that would otherwise blow up. See also [Tasks](https://arbital.com/p/4mn).
Similarly, [conservative strategies](https://arbital.com/p/2qp) can be seen as a more indirect attack on some forms of Goodhart's Curse--we try to stick to a conservative boundary drawn around previously whitelisted instances of the goal concept, or to using strategies similar to previously whitelisted strategies. This averts searching a much huger space of possibilities that would be more likely to contain errors somewhere. But Goodhart's Curse might single out what constitutes a 'conservative' boundary, if our definition is less than absolutely perfect.
|
e01885f0-c017-4d74-803f-ccf5f816e5c3
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
[AN #93]: The Precipice we’re standing at, and how we can back away from it
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter.
Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-93)** (may not be up yet).
HIGHLIGHTS
==========
**[The Precipice: Existential Risk and the Future of Humanity](https://www.amazon.com/Precipice-Existential-Risk-Future-Humanity/dp/0316484911)** *(Toby Ord)* (summarized by Rohin): This book argues that humanity is in a special stage of its development: it is on the *precipice*, a narrow time during which we have enough power to destroy ourselves, but not enough wisdom to have mitigated such risks. It first argues that existential risk would be very important to reduce (for all the standard reasons), and then considers many different kinds of existential risks, finding that natural ones (asteroids, supervolcanoes, stellar explosions) are small relative to anthropogenic risks, both current (nuclear war, climate change, environmental destruction) and future (engineered pandemics, unaligned AI, dystopian scenarios). I'll focus primarily on the part about AI risk, as well as some of the comments on existential risk in general.
The AI risk presentation in the book was similar to that in **[Superintelligence](https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742)**: it argues for risk from goal-directed AI systems (though the terminology used in the book is different). It first demonstrates the strong progress in deep learning, and then notes that expert surveys estimate that AGI is more likely than not to arrive in the next century. It then notes that we don't know how to specify a reward function for an AI system (even with e.g. inverse reinforcement learning), and to the extent that we get it wrong, it pits us in competition against a superintelligent adversary. Ideas like switching off the AI system wouldn't work, due to convergent instrumental subgoals like survival.
It also considers some obvious objections, including the very reasonable objection that "AI researchers won't build something that will kill them". However, Toby is still worried, citing that due to the unilateralist curse unaligned AGI might still be built by the most optimistic researchers, and in any case the personal benefits to the researchers might justify the risk of misalignment to them personally (though it would not be justified for the world as a whole).
The book then spends some time discussing *risk factors*, which are things that do not *directly* lead to existential risks, but indirectly *exacerbate* other existential risks, making them more likely. For example, great power war seems like a risk factor: it isn't going to cause an existential catastrophe by itself, but it increases the likelihood that we use risky technologies like bioweapons and AI that could then cause an existential catastrophe.
The book also has lots of useful insights about existential risks in general, which then also apply to AI risk: for example, risks that strike sooner should be prioritized (since the later risks can be dealt with later), risks that are more sudden will be more important to focus on (since we won't be able to build support as the risk gradually comes in), and risks that are "sharper" will be more neglected since there won't be as many "warning shots".
**Read more:** **[FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord](https://futureoflife.org/2020/03/31/he-precipice-existential-risk-and-the-future-of-humanity-with-toby-ord/)**
**Rohin's opinion:** I enjoyed this book more than I thought I would: it had a lot of novel content for me, and I liked the explanations and comparisons across different kinds of existential risks (something that I hadn't really seen a single unified perspective on), and I especially liked the constant focus on what we do and don't know -- it felt more like a research paper (albeit in a conversational style) than a popular book, and was similarly information-dense.
On the AI part specifically, I liked that one of the endnotes cashed out powerful AI systems using model-based RL: this indeed seems like the thing that is closest to the classic expected utility maximizer, so the conclusions make a bit more sense. You still have to wonder how exactly the model is learned, and how exactly the AI system becomes good at using the model to find good actions, but at least under those two assumptions you would have all the standard convergent instrumental subgoals. In contrast, with model-free RL, the default expectation is that the RL agent needs to try things multiple times before it can learn to do them again, so it's less clear how it starts doing novel things. It seems that model-based and model-free RL are pretty similar so the distinction doesn't matter in practice, but at least conceptually it's a lot easier to reason about the model-based system (at least in the context of AI risk).
Toby gives a 1 in 10 chance of existential catastrophe from AI in the next century (more than half of his total of 1 in 6), which decomposes into a 1 in 2 chance of AGI this century, and 1 in 5 of it leading to existential catastrophe. This is a bit more pessimistic than Paul's **[estimate](https://aiimpacts.org/conversation-with-paul-christiano/)** (**[AN #80](https://mailchi.mp/b3dc916ac7e2/an-80-why-ai-risk-might-be-solved-without-additional-intervention-from-longtermists)**) of 10% EV loss (which was over all time, not just this century), which is in turn a bit more pessimistic than the 1 in 10 chance that I **[estimated](https://aiimpacts.org/conversation-with-rohin-shah/)** (**[AN #80](https://mailchi.mp/b3dc916ac7e2/an-80-why-ai-risk-might-be-solved-without-additional-intervention-from-longtermists)**) (and am now forever anchored on), which was over all time *and* conditional on no additional effort from longtermists. But I wouldn't read too much into this -- 10 is a nice round number, and that probably played a big role in why I chose it. I certainly don't feel calibrated enough to easily tell the difference between 1 in 5 and 1 in 20 on a question of this complexity.
I am very happy about this trend of people actually stating numbers: it's a lot easier to narrow down on the important disagreements when people put down numbers, even if they're completely made up. I'd really like to see numbers from people who have larger disagreements (as I expect would be the case with e.g. MIRI researchers).
TECHNICAL AI ALIGNMENT
======================
LEARNING HUMAN INTENT
---------------------
**[Deconfusing Human Values Research Agenda v1](https://www.alignmentforum.org/posts/k8F8TBzuZtLheJt47/deconfusing-human-values-research-agenda-v1)** *(G Gordon Worley III)* (summarized by Rohin): This post argues that since 1. human values are necessary for alignment, 2. we are confused about human values, and 3. we couldn't verify it if an AI system discovered the structure of human values, we need to do research to become less confused about human values. This research agenda aims to deconfuse human values by modeling them as the input to a decision process which produces behavior and preferences. The author's best guess is that human values are captured by valence, as modeled by **[minimization of prediction error](https://www.alignmentforum.org/posts/Cu7yv4eM6dCeA67Af/minimization-of-prediction-error-as-a-foundation-for-human)**.
**Rohin's opinion:** This is similar to the argument in **[Why we need a *theory* of human values](https://www.alignmentforum.org/posts/zvrZi95EHqJPxdgps/why-we-need-a-theory-of-human-values)** (**[AN #36](https://mailchi.mp/6751e45fbb48/alignment-newsletter-36)**), and my opinion remains roughly the same: I strongly agree that we are confused about human values, but I don't see an understanding of human values as necessary for value alignment. We could hope to build AI systems in a way where we don't need to specify the ultimate human values (or even a framework for learning them) before running the AI system. As an analogy, my friends and I are all confused about human values, but nonetheless I think they are more or less aligned with me (in the sense that if AI systems were like my friends but superintelligent, that sounds broadly fine).
INTERPRETABILITY
----------------
**[What is Interpretability?](https://www.alignmentforum.org/posts/rSMbGFfsLMB3GWZtX/what-is-interpretability)** *(Robert Kirk et al)* (summarized by Rohin): This post categorizes several interpretability methods based on their *goal* and how they *enable humans* to achieve the goal.
**Rohin's opinion:** It's striking to me how different this is from other work, e.g. **[Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior](http://arxiv.org/abs/1811.09722)** (**[AN #36](https://mailchi.mp/6751e45fbb48/alignment-newsletter-36)**). It seems like interpretability is a really vague, nebulous term that has so far (to my limited knowledge) not been made precise.
ADVERSARIAL EXAMPLES
--------------------
**[Physically Realistic Attacks on Deep Reinforcement Learning](https://bair.berkeley.edu/blog/2020/03/27/attacks/)** *(Adam Gleave)* (summarized by Rohin): This is a blog post for a previously summarized paper, **[Adversarial Policies: Attacking Deep Reinforcement Learning](https://arxiv.org/abs/1905.10615)** (**[AN #70](https://mailchi.mp/732eaa192df0/an-70-agents-that-help-humans-who-are-still-learning-about-their-own-preferences)**).
FORECASTING
-----------
**[2019 trends in GPU price per FLOPS](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/)** *(Asya Bergal)* (summarized by Rohin): This post analyzes the the trends in cost per FLOP for GPUs. There are a bunch of details in how to do this analysis, but they end up finding that this cost goes down by an order of magnitude over 17 years for single-precision FLOPS (halving time: 5 years), 10 years for half-precision FLOPS (halving time: 3 years), and 5 years for half-precision fused multiply-add FLOPS (halving time: 1.5 years). However, the latter two categories have become more popular in recent years with the rise of deep learning, so their low halving times might be because some of the single-precision hardware was converted to half-precision hardware, rather than fundamental technological improvements.
MISCELLANEOUS (ALIGNMENT)
-------------------------
**[If I were a well-intentioned AI](https://www.alignmentforum.org/s/knbhjv252HshMSwpt/p/gzWb5kWwzhdaqmyTt)** *(Stuart Armstrong)* (summarized by Rohin): This sequence takes on the perspective of an AI system that is well-intentioned, but lacking information about what humans want. The hope is to find what good AI reasoning might look like, and hopefully use this to derive insights for safety. The sequence considers Goodhart problems, adversarial examples, distribution shift, subagent problems, etc.
**Rohin's opinion:** I liked this sequence. Often when presented with a potential problem in AI safety, I ask myself why the problem doesn't also apply to humans, and how humans have managed to solve the problem. This sequence was primarily this sort of reasoning, and I think it did a good job of highlighting how with sufficient conservatism it seems plausible that many problems are not that bad if the AI is well-intentioned, even if it has very little information, or finds it hard to communicate with humans, or has the wrong abstractions.
OTHER PROGRESS IN AI
====================
DEEP LEARNING
-------------
**[Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization](http://arxiv.org/abs/2002.10657)** *(Satrajit Chatterjee)* (summarized by Nicholas): Deep neural networks trained with gradient descent do well at generalizing from their training set, but the field currently has relatively little understanding of why that is. Large networks have enough parameters to fully memorize the training set and can do so even if trained on data with entirely random labels. This allows for many functions that would fit the training set well, but not generalize, a phenomenon known as overfitting. The question is how gradient descent picks out one of a small subset of functions that will generalize well.
The *Coherent Gradients* hypothesis, introduced here and tested further in **[this paper](http://arxiv.org/abs/2003.07422)**, is that this results from per-example gradients being averaged during gradient descent. For each example, some of the gradient points in a direction that is idiosyncratic to that example, but some of it points towards a more general solution. When the average is taken across these gradients, the more general directions reinforce each other while the example-specific directions cancel out. As a result, the training process moves faster towards more general directions.
In order to test this hypothesis, they run two experiments. First they use varying amounts of label noise (corrupting a fraction of the dataset to have random labels). They predict and find that:
1. More label noise leads to slower learning.
2. The uncorrupted examples will be learned faster.
The next experiment tests a novel form of regularization, called winsorization, where they clip the gradients on a per-example and per-parameter basis to prevent a single example from dominating the gradient, effectively curtailing the component of the gradient that is example-specific. Since the computation of per-example gradients is expensive, when scaling this up to larger networks, they instead use the median of 3 mini-batches to address outliers. The experiments suggest that winsorization reduces overfitting and in particular prevents neural nets from learning randomly labeled data.
**Read more:** **[Explaining Memorization and Generalization: A Large-Scale Study with Coherent Gradients](http://arxiv.org/abs/2003.07422)**
**Nicholas's opinion:** The hypothesis makes sense to me and the experiments do seem to bear out their conclusions. However, none of the results of the experiments were surprising to me and seem to me like they could be consistent with other explanations for generalization. I would be more convinced if the Coherent Gradients hypothesis made predictions that were different from other leading theories and then those turned out to be true.
#### **FEEDBACK**
I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**.
#### **PODCAST**
An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**.
|
b2c14f19-f4cb-4d2b-9c7e-25e7e9f2f998
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Nash equilibria of symmetric zero-sum games
This post is meant to be a reference for some basic properties of Nash equilibria in symmetric zero-sum games. I'll focus on two player games for simplicity, though the content of the post straightforwardly generalizes to the case of N>2 players. I won't assume familiarity with game theory in this post, though the post could still be somewhat technical and you should consider yourself appropriately warned of this.
Preliminaries
Introducing the terminology
For the purposes of this post, a N-player game is characterized by a reward function r:∏iAi→∏iR which takes an N-tuple of player "actions" (a1,a2,…,aN) and maps this to an N-tuple (r1,r2,…,rN) of rewards that are received by each of the N players. We will interpret "action" liberally to mean "deterministic strategy", so an element of Ai is a deterministic strategy that encodes moves that player i will make in every possible situation that could arise in the game. We will work explicitly with finite games: these are games for which the sets Ai are finite sets. Here the underlying game could be a game that involves randomness: in this case r would simply represent the expected reward of playing the game.
We will also require a probabilistic notion of strategy. A strategy si for player i is simply a probability distribution over the finite set of actions Ai, so it's a function si:Ai→[0,1] that must sum to 1. I will denote the set of all strategies of player i by Si, and I will also engage in some operator overloading to extend the definition of the reward function r to strategies in the obvious way by treating the decisions of distinct players as independent events and taking expectations:
r(s1,…,sN)=∑a1∈A1,…,aN∈aNr(a1,…,aN)∏isi(ai)
A game is said to be zero-sum if the reward function r has the property that ∑iri=0 regardless of the actions chosen by the players, and symmetric if the action sets Ai are all identical for all of the players and the reward function is covariant under (or commutes with) permutation
|
b62c0f78-38af-4a2d-ad7e-de7c5d2df3d0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Philosophical differences
[Many people have been complaining about the lack of new content on LessWrong lately, so I thought I'd cross-post my latest blog post here in discussion. Feel free to critique the content as much as you like, but please do keep in mind that I wrote this for my personal blog and not with LW in mind specifically, so some parts might not be up to LW standards, whereas others might be obvious to everyone here. In other words...well, be gentle]
---------------------------
You know what’s scarier than having enemy soldiers at your border?
Having sleeper agents within your borders.
Enemy soldiers are malevolent, but they are at least visibly malevolent. You can see what they’re doing; you can fight back against them or set up defenses to stop them. Sleeper agents on the other hand are malevolent and invisible. They are a threat and you don’t know that they’re a threat. So when a sleeper agent decides that it’s time to wake up and smell the gunpowder, not only will you be unable to stop them, but they’ll be in a position to do far more damage than a lone soldier ever could. A single well-placed sleeper agent can take down an entire power grid, or bring a key supply route to a grinding halt, or – in the worst case – kill thousands with an act of terrorism, all without the slightest warning.
Okay, so imagine that your country is in wartime, and that a small group of vigilant citizens has uncovered an enemy sleeper cell in your city. They’ve shown you convincing evidence for the existence of the cell, and demonstrated that the cell is actively planning to commit some large-scale act of violence – perhaps not imminently, but certainly in the near-to-mid-future. Worse, the cell seems to have even more nefarious plots in the offing, possibly involving nuclear or biological weapons.
Now imagine that when you go to investigate further, you find to your surprise and frustration that no one seems to be particularly concerned about any of this. Oh sure, they acknowledge that in
|
6744b472-763b-479c-874a-a24e601f3aab
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A quick update from Nonlinear
One example of the evidence we’re gathering
We are working hard on a point-by-point response to Ben’s article, but wanted to provide a quick example of the sort of evidence we are preparing to share:
Her claim: “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.”
The truth (see screenshots below):
1. There was vegan food in the house (oatmeal, quinoa, mixed nuts, prunes, peanuts, tomatoes, cereal, oranges) which we offered to cook for her.
2. We were picking up vegan food for her.
Months later, after our relationship deteriorated, she went around telling many people that we starved her. She included details we believe were strategically chosen to depict us in a maximally damaging light - what could be more abusive than refusing to care for a sick girl, alone in a foreign country? And if someone told you that, you’d probably believe them, because who would make something like that up?
Evidence
* The screenshots below show Kat offering Alice the vegan food in the house (oatmeal, quinoa, cereal, etc), on the first day she was sick. Then, when she wasn’t interested in us bringing/preparing those, I told her to ask Drew to go pick up food, and Drew said yes. Kat also left the house and went and grabbed mashed potatoes for her nearby.
* See more screenshots here of Drew’s conversations with her.
Initially, we heard she was telling people that she “didn’t eat for days,” but she seems to have adjusted her claim to “barely ate” for “2 days”.
It’s important to note that Alice didn’t lie about something small and unimportant. She accused of us a deeply unethical act - the kind that most people would hear and instantly think you must be a horrible human - and was caught lying.
We believe many people in EA heard this lie and updated unfavorably towards us. A single false rumor like this can
|
70c10c24-8006-4536-8de4-7bae0d13c521
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Draining the swamp
In an earlier post, I outlined our main weapons against infectious disease, including vaccines, antibiotics, antiseptics, pest control, sanitation, and general hygiene. These technologies (in a broad sense, even hand-washing is a technology) have largely eliminated lethal diseases such as smallpox, malaria, cholera, tuberculosis, and polio, at least in the developed world.
But which of these technologies mattered most? Which should we highlight in a history of health and medicine, and which should we hold in our minds as major examples of human progress against disease?
Most histories of medicine give the spotlight to vaccines and antibiotics. They’re the most effective medical treatments; prior to their introduction, there was little a doctor could do for an infected patient.
But to really answer the question, we should look at mortality rates over time, by disease where possible, and correlate reductions in mortality to specific interventions effective against specific classes of disease. Otherwise we run the risk of assuming that just since something is in all the histories, it must be the most important (leading me to then feature it prominently in my history, thereby perpetuating the cycle).
So I started looking into the data and interpretations of it. And the surprising thing I found is that infectious disease mortality rates have been declining steadily since long before vaccines or antibiotics.
----------------------------------------
First, let’s look at the data that clearly shows that something was going on prior to any effective medical treatment for most diseases.
Here’s a chart of 20th century mortality in the US [1] (the spike around 1918 is a major worldwide influenza epidemic, sometimes known as the Spanish Flu):
US crude mortality rates
Armstrong, Conn & Pinner 1999, Fig. 2
And here’s a similar chart I made for England and Wales:
Mortality rates in England & Wales, age- and sex-standardized to 1901
Data from McKeown, Record & Turner 1975
|
35912976-b1fb-46a5-984f-0b181ddf96e3
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Value alignment problem
Disambiguation: For the research subject that includes the entire edifice of how and why to produce good AIs, see [https://arbital.com/p/2v](https://arbital.com/p/2v).
summary: The 'value alignment problem' is to produce [sufficiently advanced machine intelligences](https://arbital.com/p/7g1) that *want* to do [beneficial](https://arbital.com/p/3d9) things and not do harmful things. The largest-looming subproblem is ['value identification' or 'value learning'](https://arbital.com/p/6c) (sometimes considered synonymous with value alignment) but this also includes subproblems like [https://arbital.com/p/45](https://arbital.com/p/45), that is, AI values such that it doesn't *want* to interfere with you correcting what you see as an error in its code.
The 'value alignment problem' is to produce [sufficiently advanced machine intelligences](https://arbital.com/p/7g1) that *want* to do [beneficial](https://arbital.com/p/3d9) things and not do harmful things. The largest-looming subproblem is ['value identification' or 'value learning'](https://arbital.com/p/6c) (sometimes considered synonymous with value alignment) but this also includes subproblems like [https://arbital.com/p/45](https://arbital.com/p/45), that is, AI values such that it doesn't *want* to interfere with you correcting what you see as an error in its code.
|
bf6fc9e5-77f5-43a4-87e0-cf781ddcf94c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What Direct Instruction is
A couple of days ago, prompted by several recent posts by Owen_Richardson, I checked out the book "Theory of Instruction" (Engelmann and Carnine, 1982) from my university library and promised to read it this weekend and write a post about Direct Instruction. This is that post.
Learning through examples
Direct Instruction is based on a theory of learning that assumes the learner capable of extracting a concept inductively through examples of that concept. I may not know what a blegg is, but after you show me several examples of bleggs and rubes, I will be able to figure it out. The principle of DI is to use the same basic procedure of giving examples to teach every concept imaginable. Naturally, in some cases, the process might be sped up by giving an explanation first; furthermore, there are some things in every subject you just have to memorize, and DI doesn't magically change that. However, it is assumed that the examples are where the real learning occurs.
The meat of the theory is using experimental data and cognitive science to establish rules for how examples ought to be given. Here are a few of the more basic ones:
* It is impossible to demonstrate a concept using positive examples alone. Here I am reminded of the 2-4-6 game, in which subjects fail to test triplets that disconfirm their hypothesis. A teacher has control over the examples presented, so it is important to disconfirm the hypotheses that the learners (consciously or unconsciously) generate.
* To successfully teach a quality, it is important that all positive examples only share that one quality. Imagine that you are being taught what a blegg is by a sequence of examples that include blue eggs and red cubes. By the end, you will not be certain whether the defining feature of a blegg is that it's blue, or that it's an egg, or both at once, or if the critical factor is the vanadium ore content of an object.
* The way the example is presented is also a quality that must be controlled in this f
|
a19781df-c895-490c-b311-ba294b86157b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
'X is not about Y' is not about psychology
As Robin Hanson says:
> Food isn’t about Nutrition
> Clothes aren’t about Comfort
> Bedrooms aren’t about Sleep
> Marriage isn’t about Romance
> Talk isn’t about Info
> Laughter isn’t about Jokes
> Charity isn’t about Helping
> Church isn’t about God
> Art isn’t about Insight
> Medicine isn’t about Health
> Consulting isn’t about Advice
> School isn’t about Learning
> Research isn’t about Progress
> Politics isn’t about Policy
As I understand it, the prototypical form of Hansonian skepticism is to look at an X which is overtly about Y, and argue that the actual behavior of humans engaging in X is not consistent with caring about Y. Instead, the Hansonian cynic concludes, some unspoken motives must be at work; most often, status motives. He has a new book with Kevin Silmer which (by the sound of it) covers a lot of examples.
For him, the story of what's going on in cases like this has a lot to do with evolutionary psychology -- humans are adapted to play these games about what we want, even if we don't know it. Our brains keep the conscious justifications seperate from the hidden motives, so that we can explain ourselves in public while still going after private goals.
I'm not denying that this plays a role, but I think there's a more general phenomenon here which has nothing to do with humans or evolutionary psychology.
In economics, models based on an assumption of rational agents are fairly successful at making predictions, even though we know individual humans are far from rational. As long as the market contains inefficiencies which can be exploited, someone can jump in and exploit them. Not everyone needs to be rational, and no one person needs to be rational about everything, for the market to appear rational.
Similarly, the tails come apart whether or not X and Y have to do with humans. Someone might score high on calibration, but that doesn't make their strongest beliefs the most trustworthy -- indeed, they're the ones we ought to downgrade the most, e
|
1a69baf6-b1aa-4cb2-9c22-61f6f1154fdd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Making a Pedalboard
A few weeks ago I posted about how I was thinking about a pedalboard. I've now made one!
Before:
After:
It folds up slightly and fits in a standard rolly suitcase:
Not pictured: I later added padding on all six sides.
Here's the pedalboard by itself:
You can see the bundle of wires to the left that go to the keyboard and bass whistle box, and the wires to the right that go to the electronic drumkit's brain.
I really like how much faster I can set this up and strike. I now need less than five minutes instead of maybe half an hour.
What I wasn't expecting before I started this project, though, is how nice it is when playing. My feet can easily reach the buttons, without even stopping playing kick with my left heel. This is way better than when the pedals sprawled all over the floor and didn't stay put.
I started by cutting a piece of 3/4" plywood to the largest size I could fit in the suitcase I usually travel with. Then I played around with pedal layouts:
I realized I could fit the talkbox in the middle, and expand my effective space with a folding panel:
I put down velcro:
And cut away unused parts to save weight:
Here it is all together:
|
91e338b5-0e58-4762-bae3-f51a7f1aabe8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Re-evaluate old beliefs
I've noticed that, although people can become more rational, they don't win noticeably more. We usually re-calibrate our self-confidence, become more stubborn, and make bigger errors.
Is it possible that the benefit from increasing your prediction accuracy is no greater than the loss incurred from taking riskier bets due to greater self-confidence?
Model 1: Static (or, constant re-evaluation of all beliefs)
For any yes/no question q, person i has probability pi of being right. Suppose person i always knows the true value of pi.
Suppose, without loss of generality, that we consider only questions where person i decides the answer is "yes". (The situation is symmetric for cases where they answer "no", so we will get the same results considering only the "yes" questions.)
Suppose that, for every question q, there is some current odds offered by the world, described by society's accepted value y for P(q). Society sets odds to have an expected profit near zero. So the bet will cost the player C if q is false, and will give a payout D if q is true, such that y(D) = (1-y)C. Set D = 1-y, C = y.
Person i then takes the bet for each question q for which i thinks q is true, and Dpi > C(1-pi), or (1-y)pi > y(1-pi), 1/y - y/y > 1/p - p/p, y < p. As they are right with probability pi, they win pi of these bets.
Suppose y is distributed uniformly from zero to one. This is the most-suspicious assumption.
Person i's profit F(pi) is their payouts minus their losses. So F(pi) is the integral, from y = 0 to pi, of [(1-y)pi - y(1-pi) = pi - y]. This integral equals pi2 / 2.
Good! Your winnings should be proportional to the square of your accuracy!
But that doesn't seem to match what I observe.
Model 2: No re-evaluation of old beliefs
Now suppose that a person improves their accuracy pi over time, and assumes that pi is the accuracy for all their beliefs, but doesn't constantly re-evaluate all old beliefs. In this model, time t will be equal to pi (representing
|
d2c14a40-6f25-4396-85d6-7b1ae8333e47
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Mental Abstractions
The Primacy of the Abstract is a talk given by the economist F. A. Hayek in 1969. It discusses the relationship between concrete and abstract mental phenomenon. In this post we summarize the main points made in his talk. Here are a few main points:
* Mental abstractions are the cause of concrete perceptions. In other words, abstractions can exist without concrete percepts, but concrete percepts can't exist without abstractions.
* Particular behavioral outputs are the result of many mental abstractions jointly constraining the organism to perform that behavior
* New mental abstractions cannot be designed in a top-down fashion, and are instead chosen from pre-existing abstractions that are found to be useful
* The unconscious is more abstracted than the conscious - it is super-conscious.
Introduction
The phrase primacy of the abstract sounds like a paradox. In a more standard view of the abstract (perhaps one emphasized in mathematics), the abstract arises by finding patterns and generalizing concrete examples. From this perspective the concrete is the primary phenomena from which the abstract is derived.
Hayek takes the contrasting viewpoint: what if those mental constructs (sensations, perceptions, images, etc.) that feel primary, are actually the result of multiple abstract classes "superimposing" together to select a single concrete mental object? Hayek explores the consequences of this framework, ultimately making a compelling case.
In Hayek's view the abstract is causally primary. The abstract comes before and ultimately creates the low-level percept. He says,
> in the mind the abstract can exist without the concrete, but not the concrete without the abstract.
This is certainly not how I tend to think about how my mind works. I imagine that I experience a vast array of particular sensory experiences, and the abstractions that I have developed allow me to classify and operate on them as abstract entities: couch, classical music, addition, etc. Fro
|
ec4a1bcf-6d45-4d59-a396-ba5713c80988
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Crowdsourcing Anki Decks
Cross-posting on EA Forum.
I think it could be useful if folks had a relatively low bar for sharing Anki decks they make, especially if they're for an EA/rationalist relevant book, an 80,000 Hours podcast episode, or a good or popular textbook (especially, if it's listed here!). I would love for the comments on the EA Forum version of this post became an ever expanding list of such decks.
Perhaps you could state:
* What book/podcast/etc. the deck is on
* I'd encourage you to have an Anki folder for the book/podcast/etc. but then have the decks themselves broken down by chapter/episode/article/etc.
* Link to deck
* Will you update the deck based on people's corrections?
* In your view, how comprehensive is it?
* In you view, what is it's quality?
* How much experience do you have making decks? (In years, books, or some other relevant metric you prefer.)
* Are you okay with others reviewing the deck, to help others?
|
0d4467ba-e543-46d4-94a3-5f2c8211f4cb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Saint Petersburg sunday meetup
Discussion article for the meetup : Saint Petersburg sunday meetup
WHEN: 16 February 2014 04:00:00PM (+0400)
WHERE: Санкт-Петербург, м. Технологический Институт, ул.1-я Красноармейская, дом 15
We most likely will be trying out Pandemia and biased board gaming.
Also I expect some discussions to arrise around topics of congitive biases, utility, or other zeitgeist stuff.
If you are russian lesswronger who sees one of my ads for the first time, please check out our newsletter or vk group: newsletter or http://vk.com/lw_spb for more detailed descriptions.
If you are a foreign guest in Saint Petersburg - we also would all be glad to see you and to meet you - at lest some of our attendees speak english.
Discussion article for the meetup : Saint Petersburg sunday meetup
|
8a51958a-d59b-41bf-8608-940fad7e7ea4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
People neglect small probability events
Over at overcomingbias Robin Hanson wrote:
> On September 9, 1713, so the story goes, Nicholas Bernoulli proposed the following problem in the theory of games of chance, after 1768 known as the St Petersburg paradox …:
>
> Peter tosses a coin and continues to do so until it should land heads when it comes to the ground. He agrees to give Paul one ducat if he gets heads on the very first throw, two ducats if he gets it on the second, four if on the third, eight if on the fourth, and so on, so that with each additional throw the number of ducats he must pay is doubled.
>
> Nicholas Bernoulli … suggested that more than five tosses of heads are morally impossible [and so ignored]. This proposition is experimentally tested through the elicitation of subjects‘ willingness-to-pay for various truncated versions of the Petersburg gamble that differ in the maximum payoff. … All gambles that involved probability levels smaller than 1/16 and maximum payoffs greater than 16 Euro elicited the same distribution of valuations. … The payoffs were as described …. but in Euros rather than in ducats. … The more senior students seemed to have a higher willingness-to-pay. … Offers increase significantly with income. (more)
>
> This isn’t plausibly explained by risk aversion, nor by a general neglect of possibilities with a <5% chance. I suspect this is more about analysis complexity, about limiting the number of possibilities we’ll consider at any one time. I also suspect this bodes ill for existential risk mitigation.
The title of the paper is 'Moral Impossibility in the Petersburg Paradox : A Literature Survey and Experimental Evidence' (PDF):
> The Petersburg paradox has led to much thought for three centuries. This
> paper describes the paradox, discusses its resolutions advanced in the
> literature while alluding to the historical context, and presents experimental
> data. In particular, Bernoulli’s search for the level of moral impossibility in
> the Petersburg problem is st
|
27613932-306b-47d4-a875-5a8583f7e14f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
COVID-19: List of ideas to reduce the direct harm from the virus, with an emphasis on unusual ideas
This is a collection of ideas on what interventions could be investigated, funded, or implemented to potentially reduce the harms from COVID-19 by intervening directly on the causal chain leading to infection and death. In particular, it has a lot of “unusual ideas” that haven’t gotten much attention yet.
Caveats: Most of the “unusual ideas” (see later section) were generated or vetted only quickly. Before implementation, they will definitely need further investigation, and some are only meant to serve as inspiration for better interventions. But capacity to implement them if they do work can generally be built in parallel with investigating whether they work. Further, it’s also possible that some things here could have consequences worse than the disease (stuff like surveillance states, long term harm from bad vaccinations, or dual use dangerous biotechnology). Therefore, the suggestion here is to pour a lot of resources into investigating those particularly risky options, rather than to immediately and unilaterally implement them.
As the surrounding context to keep in mind, we want to:
* Stop this pandemic in its tracks
* Minimize lives lost, economic disruption, psychological harm, and political harm done directly by the disease or our responses to it
* Perhaps, if lucky, find a way to make humanity better off afterwards
For some related research topics check out Coronavirus Research Ideas for EAs.
A general list of things for interfering with the causal chain of infection and death
In this section, I’ll list categories of interventions to reduce the harm from COVID-19. These should ideally be investigated and implemented in parallel at speed for something as disruptive as this virus. Some of these are being implemented by varying degrees in nations fighting the pandemic.
* Large scale prevention of exposure: Social distancing and managing packages and surfaces
* Small scale of prevention of exposure: Physically blocking the virus before it gets to t
|
34f497b4-4fa2-497b-859c-7b2171f36dac
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Linkpost] The Expressive Capacity of State Space Models: A Formal Language Perspective
Paper authors: Yash Sarrof, Yana Veitsman, Michael Hahn.
Context: architectures with weak forward passes can be differentially transparent; see e.g. this comment / the whole thread and research agendas like externalized reasoning or the translucent thought hypothesis.
Summary thread: https://x.com/yashYRS/status/1795340993757352402. Summary of the summary thread: like transformers, which have weak forward passes, SSMs are also in the TC0 computational complexity class, 'but cover distinct fragments within it'. 'SSMs can track hierarchical structures with optimal memory [...] suggesting that SSMs, while being more parallellizable, maintain sufficient power to handle the hierarchical structure of language.'
Abstract:
> Recently, recurrent models based on linear state space models (SSMs) have shown promising performance in language modeling (LM), competititve with transformers. However, there is little understanding of the in-principle abilities of such models, which could provide useful guidance to the search for better LM architectures. We present a comprehensive theoretical study of the capacity of such SSMs as it compares to that of transformers and traditional RNNs. We find that SSMs and transformers have overlapping but distinct strengths. In star-free state tracking, SSMs implement straightforward and exact solutions to problems that transformers struggle to represent exactly. They can also model bounded hierarchical structure with optimal memory even without simulating a stack. On the other hand, we identify a design choice in current SSMs that limits their expressive power. We discuss implications for SSM and LM research, and verify results empirically on a recent SSM, Mamba.
|
a6ce894e-ba2a-4010-b95d-8102cefd0078
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Alignment By Default
Suppose AI continues on its current trajectory: deep learning continues to get better as we throw more data and compute at it, researchers keep trying random architectures and using whatever seems to work well in practice. Do we end up with aligned AI “by default”?
I think there’s at least a plausible trajectory in which the answer is “yes”. Not very likely - I’d put it at ~10% chance - but plausible. In fact, there’s at least an argument to be made that alignment-by-default is *more* likely to work than many fancy alignment proposals, including [IRL variants](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc) and [HCH-family methods](https://www.lesswrong.com/s/EmDuGeRw749sD3GKd).
This post presents the rough models and arguments.
I’ll break it down into two main pieces:
* Will a sufficiently powerful unsupervised learner “learn human values”? What does that even mean?
* Will a supervised/reinforcement learner end up aligned to human values, given a bunch of data/feedback on what humans want?
Ultimately, we’ll consider a semi-supervised/transfer-learning style approach, where we first do some unsupervised learning and hopefully “learn human values” before starting the supervised/reinforcement part.
As background, I will assume you’ve read some of the core material about human values from [the sequences](https://www.lesswrong.com/rationality), including [Hidden Complexity of Wishes](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes), [Value is Fragile](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile), and [Thou Art Godshatter](https://www.lesswrong.com/posts/cSXZpvqpa9vbGGLtG/thou-art-godshatter).
Unsupervised: Pointing to Values
--------------------------------
In this section, we’ll talk about why an unsupervised learner might *not* “learn human values”. Since an unsupervised learner is generally just optimized for predictive power, we’ll start by asking whether theoretical algorithms with best-possible predictive power (i.e. Bayesian updates on low-level physics models) “learn human values”, and what that even means. Then, we’ll circle back to more realistic algorithms.
Consider a low-level physical model of some humans - e.g. a model which simulates every molecule comprising the humans. Does this model “know human values”? In one sense, yes: **the low-level model has everything there is to know about human values embedded within it, in exactly the same way that human values are embedded in physical humans**. It has “learned human values”, in a sense sufficient to predict any real-world observations involving human values.
But it seems like there’s a sense in which such a model does not “know” human values. Specifically, although human values are *embedded* in the low-level model, **the embedding itself is nontrivial**. Even if we have the whole low-level model, we still need that embedding in order to “point to” human values specifically - e.g. to use them as an optimization target. Indeed, when we say “point to human values”, what we mean is basically “specify the embedding”. (Side note: treating human values as an optimization target is not the only use-case for “pointing to human values”, and we still need to point to human values even if we’re not explicitly optimizing for anything. But that’s a [separate discussion](https://www.lesswrong.com/posts/42YykiTqtGMyJAjDM/alignment-as-translation), and imagining using values as an optimization target is useful to give a mental image of what we mean by “pointing”.)
**In short: predictive power alone is not sufficient to define human values. The missing part is the embedding of values within the model.** The hard part is pointing to the thing (i.e. specifying the values-embedding), not learning the thing (i.e. finding a model in which values are embedded).
Finally, here’s a different angle on the same argument which will probably drive some of the philosophers up in arms: *any* model of the real world with sufficiently high general predictive power will have a model of human values embedded within it. After all, it has to predict the parts of the world in which human values are embedded in the first place - i.e. the parts of which humans are composed, the parts on which human values are implemented. So in principle, it doesn’t even matter what kind of model we use or how it’s represented; as long the predictive power is good enough, values will be embedded in there, and the main problem will be finding the embedding.
Unsupervised: Natural Abstractions
----------------------------------
In this section, we’ll talk about how and why a large class of unsupervised methods might “learn the embedding” of human values, in a useful sense.
First, notice that **basically everything from the previous section still holds if we replace the phrase “human values” with “trees”**. A low-level physical model of a forest has everything there is to know about trees embedded within it, in exactly the same way that trees are embedded in the physical forest. However, while there are trees *embedded* in the low-level model, the embedding itself is nontrivial. Predictive power alone is not sufficient to define trees; the missing part is the embedding of trees within the model.
More generally, whenever we have some high-level abstract object (i.e. higher-level than quantum fields), like trees or human values, a low-level model might have the object embedded within it but not “know” the embedding.
Now for the interesting part: **empirically, we have whole classes of neural networks in which** [**concepts like “tree” have simple, identifiable embeddings**](https://arxiv.org/pdf/1811.10597.pdf). These are unsupervised systems, trained for predictive power, yet they apparently “learn the tree-embedding” in the sense that the embedding is simple: it’s just the activation of a particular neuron, a particular channel, or a specific direction in the activation-space of a few neurons.
*Neat example with “trees” from the paper linked above.*What’s going on here? We know that models optimized for predictive power will not have trivial tree-embeddings *in general*; low-level physics simulations demonstrate that much. Yet these neural networks *do* end up with trivial tree-embeddings, so presumably some special properties of the systems make this happen. But those properties can’t be *that* special, because we see the same thing for a reasonable variety of different architectures, datasets, etc.
Here’s what I think is happening: **“tree” is a natural abstraction**. More on what that means [here](https://www.lesswrong.com/posts/vDGvHBDuMtcPd8Lks/public-static-what-is-abstraction), but briefly: abstractions summarize information which is relevant far away. When we summarize a bunch of atoms as “a tree”, we’re throwing away lots of information about the exact positions of molecules/cells within the tree, or about the pattern of bark on the tree’s surface. But information like the exact positions of molecules within the tree is irrelevant to things far away - that signal is all wiped out by the noise of air molecules between the tree and the observer. The flap of a butterfly’s wings may alter the trajectory of a hurricane, but unless we know how all wings of all butterflies are flapping, that tiny signal is wiped out by noise for purposes of our own predictions. Most information is irrelevant to things far away, not in the sense that there’s no causal connection, but in the sense that the signal is wiped out by noise in other unobserved variables.
If a concept is a *natural* abstraction, that means that the concept summarizes all the information which is relevant to anything far away, and isn’t too sensitive to the exact notion of “far away” involved. That’s what I think is going on with “tree”.
Getting back to neural networks: it’s easy to see why a broad range of architectures would end up “using” natural abstractions internally. Because the abstraction summarizes information which is relevant far away, it allows the system to make far-away predictions without passing around massive amounts of information all the time. In a low-level physics model, we don’t need abstractions because we *do* pass around massive amounts of information all the time, but real systems won’t have anywhere near that capacity any time soon. So for the foreseeable future, **we should expect to see real systems with strong predictive power using natural abstractions internally**.
With all that in mind, it’s time to drop the tree-metaphor and come back to human values. Are human values a natural abstraction?
If you’ve read [Value is Fragile](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) or [Godshatter](https://www.lesswrong.com/posts/cSXZpvqpa9vbGGLtG/thou-art-godshatter), then there’s probably a knee-jerk reaction to say “no”. Human values are basically a bunch of randomly-generated heuristics which proved useful for genetic fitness; why would they be a “natural” abstraction? But remember, the same can be said of trees. Trees are a complicated pile of [organic spaghetti code](https://www.lesswrong.com/posts/NQgWL7tvAPgN2LTLn/spaghetti-towers), but “tree” is still a natural abstraction, because the concept summarizes all the information from that organic spaghetti pile which is relevant to things far away. In particular, it summarizes anything about one tree which is relevant to far-away trees.
Similarly, the concept of “human” summarizes all the information about one human which is relevant to far-away humans. It’s a natural abstraction.
Now, I don’t think “human values” are a natural abstraction in exactly the same way as “tree” - specifically, trees are abstract objects, whereas human values are *properties* of certain abstract objects (namely humans). That said, I think it’s pretty obvious that “human” is a natural abstraction in the same way as “tree”, and I expect that **humans “have values” in roughly the same way that trees “have** [**branching patterns**](https://www.longdom.org/open-access/perspectives-of-branching-pattern-and-branching-density-in-30-woodytrees-and-shrubs-in-tamulipan-thornscrub-northeast-of-mexico-2168-9776-1000160.pdf)**”**. Specifically, the natural abstraction contains a bunch of information, that information approximately factors into subcomponents (including “branching pattern”), and “human values” is one of those information-subcomponents for humans.
*Branching patterns for a few different kinds of trees.*I wouldn’t put super-high confidence on all of this, but given the remarkable track record of hackish systems learning natural abstractions in practice, I’d give maybe a **70% chance that a broad class of systems (including neural networks) trained for predictive power end up with a simple embedding of human values**. A plurality of my uncertainty is on how to think about properties of natural abstractions. A significant chunk of uncertainty is also on the possibility that natural abstraction is the wrong way to think about the topic altogether, although in that case I’d still assign a reasonable chance that neural networks end up with simple embeddings of human values - after all, no matter how we frame it, they definitely have trivial embeddings of many other complicated high-level objects.
Aside: Microscope AI
--------------------
[Microscope AI](https://www.lesswrong.com/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety) is about studying the structure of trained neural networks, and trying to directly understand their learned internal algorithms, models and concepts. In light of the previous section, there’s an obvious path to alignment where there turns out to be a few neurons (or at least some simple embedding) which correspond to human values, we use the tools of microscope AI to find that embedding, and just like that the alignment problem is basically solved.
Of course it’s unlikely to be that simple in practice, even assuming a simple embedding of human values. I don’t expect the embedding to be quite as simple as one neuron activation, and it might not be easy to recognize even if it were. Part of the problem is that we don’t even know the type signature of the thing we’re looking for - in other words, there are unanswered fundamental conceptual questions here, which make me less-than-confident that we’d be able to recognize the embedding even if it were right under our noses.
That said, this still seems like a reasonably-plausible outcome, and it’s an approach which is particularly well-suited to benefit from marginal theoretical progress.
One thing to keep in mind: this is still only about aligning *one* AI; success doesn’t necessarily mean a future in which more advanced AIs remain aligned. More on that later.
Supervised/Reinforcement: Proxy Problems
----------------------------------------
Suppose we collect some kind of data on what humans want, and train a system on that. The exact data and type of learning doesn’t really matter here; the relevant point is that any data-collection process is always, no matter what, at best a proxy for actual human values. That’s a problem, because [Goodhart’s Law](https://www.lesswrong.com/tag/goodhart-s-law) plus [Hidden Complexity of Wishes](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes). You’ve probably heard this a hundred times already, so I won’t belabor it.
Here’s the interesting possibility: **assume the data is crap**. It’s so noisy that, even though the data-collection process is just a proxy for real values, the data is consistent with real human values. Visually:
*Real human values are represented by the blue point, and the true center of our proxy measure is the red point. In this case, the data generated (other points) is noisy enough that it’s consistent with real human values. Disclaimer: this is an analogy, I don’t actually imagine values and proxies being directly represented in the same space as the data.*At first glance, this isn’t much of an improvement. Sure, the data is *consistent* with human values, but it’s consistent with a bunch of other possibilities too - including the real data-collection process (which is exactly the proxy we wanted to avoid in the first place).
But now suppose we do some transfer learning. We start with a trained unsupervised learner, which already has a simple embedding of human values (we hope). We give our supervised learner access to that system during training. **Because the unsupervised learner has a simple embedding of human values, the supervised learner can easily score well by directly using that embedded human values model**. So, we cross our fingers and hope the supervised learner just directly uses that embedded human values model, and the data is noisy enough that it never “figures out” that it can get better performance by directly modelling the data-collection process instead.
In other words: **the system uses an actual model of human values as a proxy for our proxy of human values**.
This requires hitting a window - our data needs to be good enough that the system can tell it should use human values as a proxy, but bad enough that the system can’t figure out the specifics of the data-collection process enough to model it directly. This window may not even exist.
(Side note: we can easily adjust this whole story to a situation where we’re training for some task other than “satisfy human values”. In that case, the system would use the actual model of human values to model the Hidden Complexity of whatever task it’s training on.)
Of course in practice, the vast majority of the things people use as objectives for training AI probably wouldn’t work at all. I expect that they usually look like this:
In other words, most objectives are so bad that even a little bit of data is enough to distinguish the proxy from real human values. But if we assume that there’s some try-it-and-see going on, i.e. people try training on various objectives and keep the AIs which seem to do roughly what the humans want, then it’s *maybe* plausible that we end up iterating our way to training objectives which “work”. That’s assuming things don’t go irreversibly wrong before then - including not just hostile takeover, but even just development of deceptive behavior, since this scenario does not have any built-in mechanism to detect deception.
Overall, I’d give maybe a 10-20% chance of alignment by this path, *assuming* that the unsupervised system does end up with a simple embedding of human values. The main failure mode I’d expect, assuming we get the chance to iterate, is deception - not necessarily “intentional” deception, just the system being optimized to look like it’s working the way we want rather than actually working the way we want. It’s the proxy problem again, but this time at the level of humans-trying-things-and-seeing-if-they-work, rather than explicit training objectives.
Alignment in the Long Run
-------------------------
So far, we’ve only talked about one AI ending up aligned, or a handful ending up aligned at one particular time. However, that isn’t really the ultimate goal of AI alignment research. What we really want is for AI to remain aligned in the long run, as we (and AIs themselves) continue to build new and more powerful systems and/or scale up existing systems over time.
I know of two main ways to go from aligning one AI to long-term alignment:
* Make the alignment method/theory very reliable and robust to scale, so we can continue to use it over time as AI advances.
* Align one roughly-human-level-or-smarter AI, then use that AI to come up with better alignment methods/theories.
The alignment-by-default path relies on the latter. Even assuming alignment happens by default, it is unlikely to be highly reliable or robust to scale.
That’s scary. We’d be trusting the AI to align future AIs, without having any sure-fire way to know that the AI is itself aligned. (If we did have a sure-fire way to tell, then that would itself be most of a solution to the alignment problem.)
That said, there’s a bright side: when alignment-by-default works, it’s a best-case scenario. The AI has a basically-correct model of human values, and is pursuing those values. Contrast this to things like IRL variants, which *at best* learn a utility function which approximates human values (which are probably not themselves a utility function). Or the HCH family of methods, which *at best* mimic a human with a massive hierarchical bureaucracy at their command, and certainly won’t be any more aligned than that human+bureaucracy would be.
To the extent that alignment of the successor system is limited by alignment of the parent system, that makes alignment-by-default potentially a more promising prospect than IRL or HCH. In particular, it seems plausible that imperfect alignment gets amplified into worse-and-worse alignment as systems design their successors. For instance, a system which tries to look like it’s doing what humans want rather than actually doing what humans want will design a successor which has even better human-deception capabilities. That sort of problem makes “perfect” alignment - i.e. an AI actually pointed at a basically-correct model of human values - qualitatively safer than a system which only manages to be not-instantly-disastrous.
(Side note: this isn’t the only reason why “basically perfect” alignment matters, but I do think it’s the most relevant such argument for one-time alignment/short-term term methods, especially on not-very-superhuman AI.)
In short: **when alignment-by-default works, we can use the system to design a successor without worrying about amplification of alignment errors**. However, we wouldn’t be able to tell for sure whether alignment-by-default had worked or not, and it’s still possible that the AI would make plain old mistakes in designing its successor.
Conclusion
----------
Let’s recap the bold points:
* A low-level model of some humans has everything there is to know about human values embedded within it, in exactly the same way that human values are embedded in physical humans. The embedding, however, is nontrivial. Thus...
* Predictive power alone is not sufficient to define human values. The missing part is the embedding of values within the model. However…
* This also applies if we replace the phrase “human values” with “trees”. Yet we have a whole class of neural networks in which a simple embedding lights up in response to trees. Why?
* Trees are a natural abstraction, and we should expect to see real systems trained for predictive power use natural abstractions internally.
* Human values are a little different from trees (they’re a property of an abstract object rather than an abstract object themselves), but I still expect that a broad class of systems trained for predictive power will end up with simple embeddings of human values (~70% chance).
* Because the unsupervised learner has a simple embedding of human values, a supervised/reinforcement learner can easily score well on values-proxy-tasks by directly using that model of human values. In other words, the system uses an actual model of human values as a proxy for our proxy of human values (~10-20% chance).
* When alignment-by-default works, it’s basically a best-case scenario, so we can safely use the system to design a successor without worrying about amplification of alignment errors (among other things).
Overall, I only give this whole path ~10% chance of working in the short term, and maybe half that in the long term. However, *if* amplification of alignment errors turns out to be a major limiting factor for long-term alignment, then alignment-by-default is plausibly more likely to work than approaches in the IRL or HCH families.
The limiting factor here is mainly identifying the (probably simple) embedding of human values within a learned model, so microscope AI and general theory development are both good ways to improve the outlook. Also, in the event that we are able to identify a simple embedding of human values in a learned model, it would be useful to have a way to translate that embedding into new systems, in order to align successors.
|
cecf7f6a-9593-4f6c-a0b4-d65c516269f3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Laws of Large Numbers
Introduction
In this short post we'll discuss fine-grained variants of the law of large numbers beyond the central limit theorem. In particular we'll introduce cumulants as a crucial (and very nice) invariant of probability distributions to track. We'll also briefly discusses parallels with physics. This post should be interesting on its own, but the reason I'm writing it is that this story contains a central idea for (one point of view) on a certain exciting physics-inspired point of view on neural nets. While the point of view has so far been explained in somewhat sophisticated physics language (involving quantum fields and Feynman diagrams), the main points can be explained without any physics background, purely in terms of statistics. Introducing this "more elementary" view on the subject is one of the core goals of this series of posts. This first post is purely introductory, and other than "ideological" parallels, it has essentially no physics (only statistics).
Review of the central limit theorem (as a law of large numbers correction)
The law of large numbers
Most people intuitively know the law of large numbers: that if you take n independent measurements X1,X2,…,Xn in a random process and average them, this will converge to a specific number as n goes to infinity, namely the expectation of this process, [X].
The law of large numbers can be split up into two parts, the first of which doesn’t depend on n going to infinity. Namely:
1. The cumulative property of means, which itself consists of two parts:
1. Additivity of means. The expectation of the sum of n random variables, [X1+⋯+Xn], is equal to the sum of their expectations: [X1]+[X2]+…+[Xn]. Here the variables don’t even have to be independent[1] or drawn from the same distribution.
2. Linear homogeneity of means. For any real constant λ, the expectation of the rescaled variable [λX] is equal to the rescaled expectation λ[X].
2. Existence of exact limit of averages. The average of n indep
|
8e5d9ca9-5996-4176-8150-48515e5a9b3b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
arch-anarchist reading list
This is my reading list for arch-anarchists that, although they no directly support arch-anarchy, are compatible with its ideas.
1. The Making of a Small World: a similar satirical work of fiction to Nick Bostrom's The Fable of the Tyrant-Dragon, which I have already discussed in my previous article. It also criticizes complacency with problems accepted as "natural". Here is the link: https://maxmore.substack.com/p/the-making-of-a-small-world
2. SANS CEILING HYPOTHESIS: a theory that states that are no upper limit to what sufficiently advanced intelligent life can do not being limited by fundamental limits set by physical law . the theory was proposed by the independent blogger Paul Hughes , Unfortunately the original article has been lost, that said Everything2 has a short article covering the theory and his page on the Lifeboat Foundation also has a direct quote from the article. https://everything2.com/title/Sans+Ceiling+Hypothesis , https://russian.lifeboat.com/ex/bios.paul.hughes
3. the category Microdimensional mastery in the kardashev scale wiki: the Kardashev Scale wiki is a wiki inspired by the Kardashev scale (a method of measuring the level of technological advancement of a civilization, based on the total energy use of a civilization) that uses real science, theoretical science, hypothetical science and futurism, to discuss the capabilities of civilizations within the scale, also using works of fiction to exemplify the capabilities. I have already favored the Microdimensional mastery in my article " Arch-anarchy:Theory and practice" as a better alternative than the kardashev scale, fortunately they also have articles about the Microdimensional mastery with similar speculations as I used in my article, I admit they made better speculations than mine . I especially recommend. Here is the link: https://kardashev.fandom.com/wiki/Category:Microdimensional (While I must say I find some of their timelines for certain technologies in the more dis
|
8a574221-8c12-4e00-bbb9-c0022813c130
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study
1 Introduction
---------------
Deep reinforcement learning (RL) has seen increased interest in recent years due to its ability to have neural-network-based agents learn to act in environments through interactions.
For continuous control tasks, on-policy algorithms such as REINFORCE [[2](#bib.bibx2)], TRPO [[10](#bib.bibx10)], A3C [[14](#bib.bibx14)], PPO [[17](#bib.bibx17)] and off-policy algorithms such as DDPG [[13](#bib.bibx13)] and SAC [[21](#bib.bibx21)] have enabled successful applications such as quadrupedal
locomotion [[20](#bib.bibx20)], self-driving [[30](#bib.bibx30)] or dexterous in-hand manipulation [[20](#bib.bibx20), [25](#bib.bibx25), [32](#bib.bibx32)].
Many of these papers investigate in depth different loss functions and learning paradigms.
Yet, it is less visible that behind successful experiments in deep RL there are complicated code bases that contain a large number of low- and high-level design decisions that are usually not discussed in research papers.
While one may assume that such “choices” do not matter, there is some evidence that they are in fact crucial for or even driving good performance [[27](#bib.bibx27)].
While there are open-source implementations available that can be used by practitioners, this is still unsatisfactory:
In research publications, often different algorithms implemented in different code bases are compared one-to-one.
This makes it impossible to assess whether improvements are due to the algorithms or due to their implementations.
Furthermore, without an understanding of lower-level choices, it is hard to assess the performance of high-level algorithmic choices as performance may strongly depend on the tuning of hyperparameters and implementation-level details.
Overall, this makes it hard to attribute progress in RL and slows down further research
[[22](#bib.bibx22), [27](#bib.bibx27), [15](#bib.bibx15)].
Our contributions.
Our key goal in this paper is to investigate such lower level choices in depth and to understand their impact on final agent performance.
Hence, as our key contributions, we (1) implement >50 choices in a unified on-policy algorithm implementation,
(2) conducted a large-scale (more than 250’000 agents trained) experimental study that covers different aspects of the training process,
and (3) analyze the experimental results to provide practical insights and recommendations for the on-policy training of RL agents.
Most surprising finding.
While many of our experimental findings confirm common RL practices,
some of them are quite surprising, e.g.
the policy initialization scheme significantly influences the
performance while it is rarely even mentioned in RL publications.
In particular, we have found that initializing the network
so that the initial action distribution has zero mean, a rather low standard deviation
and is independent of the observation significantly improves the training speed
(Sec. [3.2](#S3.SS2 "3.2 Networks architecture (based on the results in Appendix E) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
The rest of of this paper is structured as follows:
We describe our experimental setup and performance metrics used in Sec. [2](#S2 "2 Study design ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study").
Then, in Sec. [3](#S3 "3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") we present and analyse the experimental results and
finish with related work in Sec. [4](#S4 "4 Related Work ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") and conclusions in Sec. [5](#S5 "5 Conclusions ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study").
The appendices contain the detailed description of all design choices we experiment with (App. [B](#A2 "Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
default hyperparameters (App. [C](#A3 "Appendix C Default settings for experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
and the raw experimental results (App. [D](#A4 "Appendix D Experiment Policy Losses ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") - [K](#A11 "Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
2 Study design
---------------
#### Considered setting.
In this paper, we consider the setting of *on-policy reinforcement learning for continuous control*.
We define on-policy learning in the following loose sense: We consider policy iteration algorithms that iterate between generating experience using the current policy and using the experience to improve the policy.
This is the standard *modus operandi* of algorithms usually considered on-policy such as PPO [[17](#bib.bibx17)].
However, we note that algorithms often perform several model updates and thus may operate technically on off-policy data within a single policy improvement iteration.
As benchmark environments, we consider five widely used continuous control environments from OpenAI Gym [[12](#bib.bibx12)] of varying complexity: Hopper-v1, Walker2d-v1, HalfCheetah-v1, Ant-v1, and Humanoid-v1 111
It has been noticed that the version of the Mujoco physics simulator [[5](#bib.bibx5)]
can slightly influence the behaviour of some of the environments — <https://github.com/openai/gym/issues/1541>. We used Mujoco 2.0
in our experiments..
#### Unified on-policy learning algorithm.
We took the following approach to create a highly configurable unified on-policy learning algorithm with as many choices as possible:
1. 1.
We researched prior work and popular code bases to make a list of commonly used choices, i.e., different loss functions (both for value functions and policies), architectural choices such as initialization methods, heuristic tricks such as gradient clipping and all their corresponding hyperparameters.
2. 2.
Based on this, we implemented a single, unified on-policy agent and corresponding training protocol starting from the SEED RL code base [[28](#bib.bibx28)].
Whenever we were faced with implementation decisions that required us to take decisions that could not be clearly motivated or had alternative solutions, we further added such decisions as additional choices.
3. 3.
We verified that when all choices are selected as in the PPO implementation
from OpenAI baselines, we obtain similar performance as reported in the PPO paper [[17](#bib.bibx17)].
We chose PPO because it is probably the most commonly used on-policy RL algorithm at the moment.
The resulting agent implementation is detailed in Appendix [B](#A2 "Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study").
The key property is that the implementation exposes all choices as configuration options in an unified manner.
For convenience, we mark each of the choice in this paper with a number (e.g., C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and a fixed name (e.g. num\_envs (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))) that can be easily used to find a description of the choice in Appendix [B](#A2 "Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study").
#### Difficulty of investigating choices.
The primary goal of this paper is to understand how the different choices affect the final performance of an agent and to derive recommendations for these choices.
There are two key reasons why this is challenging:
First, we are mainly interested in insights on choices for good hyperparameter configurations. Yet,
if all choices are sampled randomly, the performance is very bad and
little (if any) training progress is made.
This may be explained by the presence of sub-optimal settings (e.g., hyperparameters of the wrong scale) that prohibit learning at all.
If there are many choices, the probability of such failure increases exponentially.
Second, many choices may have strong interactions with other related choices, for example the learning rate and the minibatch size.
This means that such choices need to be tuned together and experiments where only a single choice is varied but interacting choices are kept fixed may be misleading.
#### Basic experimental design.
To address these issues, we design a series of experiments as follows:
We create groups of choices around thematic groups where we suspect interactions between different choices, for example we group together all choices related to neural network architecture.
We also include Adam learning rate (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) in all of the groups as we
suspect that it may interact with many other choices.
Then, in each experiment, we train a large number of models where we randomly sample
the choices within the corresponding group 222Exact details for the different experiments are provided in Appendices [D](#A4 "Appendix D Experiment Policy Losses ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") - [K](#A11 "Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")..
All other settings (for choices not in the group) are set to settings of a competitive base configuration (detailed in Appendix [C](#A3 "Appendix C Default settings for experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) that is close to the default PPOv2 configuration333<https://github.com/openai/baselines/blob/master/baselines/ppo2/defaults.py>
scaled up to 256256256256 parallel environments.
This has two effects:
First, it ensures that our set of trained models contains good models (as verified by performance statistics in the corresponding results).
Second, it guarantees that we have models that have different combinations of potentially interacting choices.
We then consider two different analyses for each choice (e.g, for advantage\_estimator (C[B.2](#A2.SS2 "B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))):
*Conditional 95th percentile*: For each potential value of that choice (e.g., advantage\_estimator (C[B.2](#A2.SS2 "B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) = N-Step), we look at the performance distribution of sampled configurations with that value.
We report the 95th percentile of the performance as well as a confidence interval based on a binomial approximation 444We compute confidence intervals with a significance level of α=5%𝛼percent5\alpha=5\%italic\_α = 5 % as follows: We find il=icdf(α2)subscript𝑖𝑙𝑖𝑐𝑑𝑓𝛼2i\_{l}=icdf\left(\frac{\alpha}{2}\right)italic\_i start\_POSTSUBSCRIPT italic\_l end\_POSTSUBSCRIPT = italic\_i italic\_c italic\_d italic\_f ( divide start\_ARG italic\_α end\_ARG start\_ARG 2 end\_ARG ) and ih=icdf(1−α2)subscript𝑖ℎ𝑖𝑐𝑑𝑓1𝛼2i\_{h}=icdf\left(1-\frac{\alpha}{2}\right)italic\_i start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT = italic\_i italic\_c italic\_d italic\_f ( 1 - divide start\_ARG italic\_α end\_ARG start\_ARG 2 end\_ARG ) where icdf𝑖𝑐𝑑𝑓icdfitalic\_i italic\_c italic\_d italic\_f is the inverse cumulative density function of a binomial distribution with p=0.95𝑝0.95p=0.95italic\_p = 0.95 (as we consider the 95th percentile) and the number of draws equals the number of samples. We then report the ilsubscript𝑖𝑙i\_{l}italic\_i start\_POSTSUBSCRIPT italic\_l end\_POSTSUBSCRIPTth and ihsubscript𝑖ℎi\_{h}italic\_i start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPTth highest scores as the confidence interval..
Intuitively, this corresponds to a robust estimate of the performance one can expect if all other choices in the group were tuned with random search and a limited budget of roughly 20 hyperparameter configurations.
*Distribution of choice within top 5% configurations.*
We further consider for each choice the distribution of values among the top 5%
configurations trained in that experiment.
The reasoning is as follows:
By design of the experiment, values for each choice are distributed uniformly at random.
Thus, if certain values are over-represented in the top models, this indicates that the specific choice is important in guaranteeing good performance.
#### Performance measures.
We employ the following way to compute performance:
For each hyperparameter configuration, we train 3333 models with independent random seeds where each model is trained for one million (Hopper, HalfCheetah, Walker2d) or two million environment steps (Ant, Humanoid).
We evaluate trained policies every hundred thousand steps by freezing the policy and computing the average undiscounted episode return of 100 episodes (with the stochastic policy).
We then average these score to obtain a single performance score of the seed
which is proportional to the area under the learning curve.
This ensures we assign higher scores to agents that learn quickly.
The performance score of a hyperparameter configuration is finally set to the median performance score across the 3 seeds.
This reduces the impact of training noise, i.e., that certain seeds of the same configuration may train much better than others.
3 Experiments
--------------
We run experiments for eight thematic groups:
*Policy Losses* (Sec. [3.1](#S3.SS1 "3.1 Policy losses (based on the results in Appendix D) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
*Networks architecture* (Sec. [3.2](#S3.SS2 "3.2 Networks architecture (based on the results in Appendix E) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
*Normalization and clipping* (Sec. [3.3](#S3.SS3 "3.3 Normalization and clipping (based on the results in Appendix F) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
*Advantage Estimation* (Sec. [3.4](#S3.SS4 "3.4 Advantage Estimation (based on the results in Appendix G) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
*Training setup* (Sec. [3.5](#S3.SS5 "3.5 Training setup (based on the results in Appendix H) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
*Timesteps handling* (Sec. [3.6](#S3.SS6 "3.6 Timesteps handling (based on the results in Appendix I) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
*Optimizers* (Sec. [3.7](#S3.SS7 "3.7 Optimizers (based on the results in Appendix J) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
and *Regularization* (Sec. [3.8](#S3.SS8 "3.8 Regularization (based on the results in Appendix K) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
For each group, we provide a full experimental design and full experimental plots in Appendices [D](#A4 "Appendix D Experiment Policy Losses ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") - [K](#A11 "Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") so that the reader can draw their own conclusions from the experimental results.
In the following sections, we provide short descriptions of the experiments, our interpretation of the results, as well as practical recommendations for on-policy training for continuous control.
###
3.1 Policy losses (based on the results in Appendix [D](#A4 "Appendix D Experiment Policy Losses ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
Study description.
We investigate different policy losses (C[B.3](#A2.SS3 "B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")): vanilla policy gradient (PG), V-trace [[19](#bib.bibx19)], PPO [[17](#bib.bibx17)], AWR [[33](#bib.bibx33)], V-MPO555
We used the V-MPO policy loss without the decoupled KL constraint as we investigate
the effects of different policy regularizers separately in Sec. [3.8](#S3.SS8 "3.8 Regularization (based on the results in Appendix K) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"). [[34](#bib.bibx34)] and the limiting case of AWR (β→0→𝛽0\beta\rightarrow 0italic\_β → 0) and V-MPO (ϵn→0→subscriptitalic-ϵ𝑛0\epsilon\_{n}\rightarrow 0italic\_ϵ start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT → 0)
which we call Repeat Positive Advantages (RPA) as it is equivalent to
the negative log-probability of actions with positive advantages.
See App. [B.3](#A2.SS3 "B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") for a detailed description of the different losses.
We further sweep the hyperparameters of each of the losses (C[• ‣ B.3](#A2.I3.i2 "2nd item ‣ B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[• ‣ B.3](#A2.I3.i3 "3rd item ‣ B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[• ‣ B.3](#A2.I3.i4 "4th item ‣ B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[• ‣ B.3](#A2.I3.i4 "4th item ‣ B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[• ‣ B.3](#A2.I3.i5 "5th item ‣ B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
the learning rate (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and the number of passes over the data (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
The goal of this study is to better understand the importance of the policy loss function in the on-policy setting considered in this paper.
The goal is not to provide a general statement that one of the losses is better than the others as some of them were specifically designed for other settings
(e.g., the V-trace loss is targeted at near-on-policy data in a distributed setting).

Figure 1: Comparison of different policy losses (C[B.3](#A2.SS3 "B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Interpretation.
Fig. [1](#S3.F1 "Figure 1 ‣ 3.1 Policy losses (based on the results in Appendix D) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") shows the 95-th percentile of the average policy score during training for different policy losses (C[B.3](#A2.SS3 "B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
We observe that PPO performs better than the other losses on 4 out of 5 environments and is
one of the top performing losses on HalfCheetah.
As we randomly sample the loss specific hyperparameters in this analysis, one might argue that our approach favours choices that are not too sensitive to hyperparameters.
At the same time, there might be losses that are sensitive to their hyperparameters but for which good settings may be easily found.
Fig. [5](#A4.F5 "Figure 5 ‣ D.2 Results ‣ Appendix D Experiment Policy Losses ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") shows that even if we condition on choosing the optimal
loss hyperparameters for each loss666AWR loss has two hyperparameters — the temperature β𝛽\betaitalic\_β (C[• ‣ B.3](#A2.I3.i4 "4th item ‣ B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and the weight clipping coefficient ω𝚖𝚊𝚡subscript𝜔𝚖𝚊𝚡\omega\_{\texttt{max}}italic\_ω start\_POSTSUBSCRIPT max end\_POSTSUBSCRIPT (C[• ‣ B.3](#A2.I3.i4 "4th item ‣ B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")). We only condition on β𝛽\betaitalic\_β which is more important.,
PPO still outperforms the other losses on the two hardest tasks — Humanoid and Ant777
These two tasks were not included in the original PPO paper [[17](#bib.bibx17)] so the hyperparameters we use were not tuned for them. and is one of the top
performing losses on the other 3333 tasks.
Moreover, we show the empirical cumulative density functions of agent performance conditioned on the
policy loss used in Fig. [4](#A4.F4 "Figure 4 ‣ D.2 Results ‣ Appendix D Experiment Policy Losses ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study").
Perhaps unsurprisingly, PG and V-trace perform worse on all tasks.
This is likely caused by their inability to handle data that become off-policy in one iteration, either due to multiple passes (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) over experience (which can be seen in Fig. [14](#A4.F14 "Figure 14 ‣ D.2 Results ‣ Appendix D Experiment Policy Losses ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) or a large experience buffer (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) in relation to the batch size (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Overall, these results show that trust-region optimization (preventing the current policy from diverging too much from the behavioral one) which is present in all the other policy losses is crucial for good sample complexity.
For PPO and its clipping threshold ϵitalic-ϵ\epsilonitalic\_ϵ (C[• ‣ B.3](#A2.I3.i3 "3rd item ‣ B.3 Policy losses ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")), we further observe that ϵ=0.2italic-ϵ0.2\epsilon=0.2italic\_ϵ = 0.2 and ϵ=0.3italic-ϵ0.3\epsilon=0.3italic\_ϵ = 0.3 perform reasonably well in all environments but that lower (ϵ=0.1italic-ϵ0.1\epsilon=0.1italic\_ϵ = 0.1) or higher (ϵ=0.5italic-ϵ0.5\epsilon=0.5italic\_ϵ = 0.5) values give better performance on some of the environments (See Fig. [10](#A4.F10 "Figure 10 ‣ D.2 Results ‣ Appendix D Experiment Policy Losses ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") and Fig. [32](#A6.F32 "Figure 32 ‣ F.2 Results ‣ Appendix F Experiment Normalization and clipping ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Recommendation. Use the PPO policy loss. Start with the clipping threshold
set to 0.250.250.250.25 but also try lower and higher values if possible.
###
3.2 Networks architecture (based on the results in Appendix [E](#A5 "Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
Study description.
We investigate the impact of differences in the policy and value function neural network architectures.
We consider choices related to the network structure and size (C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")), activation functions (C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")), and initialization of network weights (C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
We further include choices related to the standard deviation of actions (C[• ‣ B.8](#A2.I6.i1 "1st item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[• ‣ B.8](#A2.I6.i2 "2nd item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[• ‣ B.8](#A2.I6.i3 "3rd item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[• ‣ B.8](#A2.I6.i4 "4th item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and transformations of sampled actions (C[• ‣ B.8](#A2.I6.i5 "5th item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Interpretation.
Separate value and policy networks (C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) appear to lead to better performance on four out of five environments (Fig. [15](#A5.F15 "Figure 15 ‣ E.1 Design ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
To avoid analyzing the other choices based on bad models, we thus focus for the rest of this experiment only on agents with separate value and policy networks.
Regarding network sizes, the optimal width of the policy MLP depends on the complexity of the environment (Fig. [18](#A5.F18 "Figure 18 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and too low or too high values can cause significant drop in performance while for the value function there seems to be no downside in using wider networks (Fig. [21](#A5.F21 "Figure 21 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Moreover, on some environments it is beneficial to make the value network wider than the policy one, e.g. on HalfCheetah the best results are achieved with 16−32163216-3216 - 32 units per layer in the policy network and 256256256256 in the value network.
Two hidden layers appear to work well for policy (Fig. [22](#A5.F22 "Figure 22 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and value networks (Fig. [20](#A5.F20 "Figure 20 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) in all tested environments.
As for activation functions, we observe that tanh activations
perform best and relu worst. (Fig. [30](#A5.F30 "Figure 30 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Interestingly, the initial policy appears to have a surprisingly high impact on the training performance.
The key recipe appears is to initialize the policy at the beginning of training so that the action distribution is centered around 00888All environments expect normalized actions in [−1,1]11[-1,1][ - 1 , 1 ]. regardless of the observation and has a rather small standard deviation.
This can be achieved by initializing the policy MLP with smaller weights in the last layer (C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), Fig. [24](#A5.F24 "Figure 24 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), this alone boosts the performance on Humanoid by 66%)
so that the initial action distribution is almost independent of the observation and by introducing an offset in the standard deviation of actions (C[• ‣ B.8](#A2.I6.i3 "3rd item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Fig. [2](#S3.F2 "Figure 2 ‣ 3.2 Networks architecture (based on the results in Appendix E) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") shows that the performance is very sensitive to the initial action standard deviation
with 0.5 performing best on all environments except Hopper where higher values perform better.

Figure 2: Comparison of different initial standard deviations of actions (C[• ‣ B.8](#A2.I6.i3 "3rd item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Fig. [17](#A5.F17 "Figure 17 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") compares two approaches to transform unbounded sampled actions into the bounded [−1, 1]11[-1,\,1][ - 1 , 1 ] domain expected by the environment (C[• ‣ B.8](#A2.I6.i5 "5th item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")): clipping and applying a tanh function.
tanh performs slightly better overall (in particular it improves the performance on HalfCheetah by 30%percent3030\%30 % ). Comparing Fig. [17](#A5.F17 "Figure 17 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")
and Fig. [2](#S3.F2 "Figure 2 ‣ 3.2 Networks architecture (based on the results in Appendix E) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") suggests that the difference might be mostly caused by the decreased magnitude
of initial actions999tanh can also potentially perform better with entropy regularization
(not used in this experiment) as it bounds the maximum possible policy entropy..
Other choices appear to be less important:
The scale of the last layer initialization matters much less for the value MLP (C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) than for the policy MLP
(Fig. [19](#A5.F19 "Figure 19 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Apart from the last layer scaling, the network initialization scheme (C[B.7](#A2.SS7 "B.7 Neural network architecture ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) does not matter too much (Fig. [27](#A5.F27 "Figure 27 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Only he\_normal and he\_uniform [[7](#bib.bibx7)] appear to be suboptimal choices with the other options performing very similarly.
There also appears to be no clear benefits if the standard deviation of the policy is learned for each state (i.e. outputted by the policy network) or once globally for all states (C[• ‣ B.8](#A2.I6.i1 "1st item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), Fig. [23](#A5.F23 "Figure 23 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
For the transformation of policy output into action standard deviation (C[• ‣ B.8](#A2.I6.i2 "2nd item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")), softplus and exponentiation perform very similarly101010We noticed that some
of the training runs with exponentiation resulted in NaNs but clipping
the exponent solves this issue (See Sec. [B.8](#A2.SS8 "B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") for the details).
(Fig. [25](#A5.F25 "Figure 25 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Finally, the minimum action standard deviation (C[• ‣ B.8](#A2.I6.i4 "4th item ‣ B.8 Action distribution parameterization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) seems to matter little, if it is not set too large (Fig. [30](#A5.F30 "Figure 30 ‣ E.2 Results ‣ Appendix E Experiment Networks architecture ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Recommendation.
Initialize the last policy layer with 100×100\times100 × smaller weights.
Use softplus to transform network output into action standard deviation and add a (negative) offset to its input to decrease the initial standard deviation of actions.
Tune this offset if possible.
Use tanh both as the activation function (if the networks are not too deep) and to transform the samples from the normal distribution to the bounded action space.
Use a wide value MLP (no layers shared with the policy) but tune the policy width (it might need to be narrower than the value MLP).
###
3.3 Normalization and clipping (based on the results in Appendix [F](#A6 "Appendix F Experiment Normalization and clipping ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
Study description.
We investigate the impact of different normalization techniques:
observation normalization (C[• ‣ B.9](#A2.I8.i1 "1st item ‣ B.9 Data normalization and clipping ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
value function normalization (C[• ‣ B.9](#A2.I8.i2 "2nd item ‣ B.9 Data normalization and clipping ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
per-minibatch advantage normalization (C[• ‣ B.9](#A2.I8.i3 "3rd item ‣ B.9 Data normalization and clipping ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
as well as gradient (C[• ‣ B.9](#A2.I8.i4 "4th item ‣ B.9 Data normalization and clipping ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and
observation (C[• ‣ B.9](#A2.I8.i1 "1st item ‣ B.9 Data normalization and clipping ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) clipping.
Interpretation.
Input normalization (C[• ‣ B.9](#A2.I8.i1 "1st item ‣ B.9 Data normalization and clipping ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) is crucial for good performance on
all environments apart from Hopper (Fig. [33](#A6.F33 "Figure 33 ‣ F.2 Results ‣ Appendix F Experiment Normalization and clipping ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Quite surprisingly, value function normalization (C[• ‣ B.9](#A2.I8.i2 "2nd item ‣ B.9 Data normalization and clipping ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) also influences the performance
very strongly — it is crucial for good performance on HalfCheetah and Humanoid,
helps slightly on Hopper and Ant and significantly *hurts* the performance on Walker2d (Fig. [37](#A6.F37 "Figure 37 ‣ F.2 Results ‣ Appendix F Experiment Normalization and clipping ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
We are not sure why the value function scale matters that much but suspect that
it affects the performance by changing the speed of the value function fitting.111111
Another explanation could be the interaction between the value function normalization
and PPO-style value clipping (C[B.2](#A2.SS2 "B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
We have, however, disable the value clipping in this experiment to avoid
this interaction. The disabling of the value clipping could
also explain why our conclusions are different from [[27](#bib.bibx27)]
where a form of value normalization improved the performance on Walker.
In contrast to observation and value function normalization, per-minibatch
advantage normalization (C[• ‣ B.9](#A2.I8.i3 "3rd item ‣ B.9 Data normalization and clipping ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) seems not to affect the performance too much
(Fig. [35](#A6.F35 "Figure 35 ‣ F.2 Results ‣ Appendix F Experiment Normalization and clipping ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Similarly, we have found little evidence that clipping normalized121212We only applied clipping if input normalization was enabled. observations (C[• ‣ B.9](#A2.I8.i1 "1st item ‣ B.9 Data normalization and clipping ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
helps (Fig. [38](#A6.F38 "Figure 38 ‣ F.2 Results ‣ Appendix F Experiment Normalization and clipping ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) but it might be worth
using if there is a risk of extremely high observations due to simulator divergence.
Finally, gradient clipping (C[• ‣ B.9](#A2.I8.i4 "4th item ‣ B.9 Data normalization and clipping ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) provides a small performance boost
with the exact clipping threshold making little difference
(Fig. [34](#A6.F34 "Figure 34 ‣ F.2 Results ‣ Appendix F Experiment Normalization and clipping ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Recommendation. Always use observation normalization and
check if value function normalization improves performance.
Gradient clipping might slightly help
but is of secondary importance.
###
3.4 Advantage Estimation (based on the results in Appendix [G](#A7 "Appendix G Experiment Advantage Estimation ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
Study description.
We compare the most commonly used advantage
estimators (C[B.2](#A2.SS2 "B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")):
N-step [[3](#bib.bibx3)], GAE [[9](#bib.bibx9)] and V-trace [[19](#bib.bibx19)]
and their hyperparameters (C[• ‣ B.2](#A2.I2.i1 "1st item ‣ B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[• ‣ B.2](#A2.I2.i2 "2nd item ‣ B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"),
C[• ‣ B.2](#A2.I2.i3 "3rd item ‣ B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[• ‣ B.2](#A2.I2.i3 "3rd item ‣ B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
We also experiment with applying PPO-style pessimistic clipping (C[B.2](#A2.SS2 "B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
to the value loss (present in the original PPO implementation but not mentioned
in the PPO paper [[17](#bib.bibx17)]) and using Huber loss [[1](#bib.bibx1)] instead of MSE for value learning (C[B.2](#A2.SS2 "B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.2](#A2.SS2 "B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Moreover, we varied the number of parallel environments used (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) as it changes the length of the experience fragments collected in each step.
Interpretation.
GAE and V-trace appear to perform better than N-step returns
(Fig. [44](#A7.F44 "Figure 44 ‣ G.2 Results ‣ Appendix G Experiment Advantage Estimation ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") and [40](#A7.F40 "Figure 40 ‣ G.2 Results ‣ Appendix G Experiment Advantage Estimation ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
which indicates that it is beneficial to combine the value estimators from multiple
timesteps.
We have not found a significant performance difference between GAE and V-trace
in our experiments.
λ=0.9𝜆0.9\lambda=0.9italic\_λ = 0.9 (C[• ‣ B.2](#A2.I2.i2 "2nd item ‣ B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[• ‣ B.2](#A2.I2.i3 "3rd item ‣ B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
performed well regardless of whether GAE
(Fig. [45](#A7.F45 "Figure 45 ‣ G.2 Results ‣ Appendix G Experiment Advantage Estimation ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
or V-trace
(Fig. [49](#A7.F49 "Figure 49 ‣ G.2 Results ‣ Appendix G Experiment Advantage Estimation ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
was used on all tasks but tuning this value per environment may
lead to modest performance gains.
We have found that PPO-style value loss clipping (C[B.2](#A2.SS2 "B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
hurts the performance regardless of the clipping threshold131313
This is consistent with prior work [[27](#bib.bibx27)].
(Fig. [43](#A7.F43 "Figure 43 ‣ G.2 Results ‣ Appendix G Experiment Advantage Estimation ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Similarly, the Huber loss (C[B.2](#A2.SS2 "B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) performed worse than MSE in all environments
(Fig. [42](#A7.F42 "Figure 42 ‣ G.2 Results ‣ Appendix G Experiment Advantage Estimation ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
regardless of the value of the threshold (C[B.2](#A2.SS2 "B.2 Advantage estimation ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) used
(Fig. [48](#A7.F48 "Figure 48 ‣ G.2 Results ‣ Appendix G Experiment Advantage Estimation ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Recommendation. Use GAE with λ=0.9𝜆0.9\lambda=0.9italic\_λ = 0.9 but neither Huber loss nor PPO-style value loss clipping.
###
3.5 Training setup (based on the results in Appendix [H](#A8 "Appendix H Experiment Training setup ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
Study description.
We investigate choices related to the data collection and minibatch handling: the number of parallel
environments used (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
the number of transitions gathered in each iteration
(C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")), the number of passes over the data (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
minibatch size (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and how the data is split into minibatches (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
For the last choice, in addition to standard choices, we also consider a new small modification of the original PPO approach:
The original PPO implementation splits the data in each policy iteration step into individual transitions and then randomly assigns them to minibatches (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
This makes it impossible to compute advantages as the temporal structure is broken. Therefore, the advantages are computed once at the beginning of each policy iteration step and then used in minibatch policy and value function optimization. This results in higher diversity of data in each minibatch at the cost of using slightly stale advantage estimations.
As a remedy to this problem, we propose to recompute the advantages at the beginning of each pass over the data instead of just once per iteration.
Results.
Unsurprisingly, going over the experience multiple times appears to be crucial for good sample complexity
(Fig. [54](#A8.F54 "Figure 54 ‣ H.2 Results ‣ Appendix H Experiment Training setup ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Often, this is computationally cheap due to the simple models considered, in particular on machines with accelerators such as GPUs and TPUs.
As we increase the number of parallel environments (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
performance decreases sharply on some of the environments (Fig. [55](#A8.F55 "Figure 55 ‣ H.2 Results ‣ Appendix H Experiment Training setup ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")). This is likely caused by shortened experience chunks (See Sec. [B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")
for the detailed description of the data collection process) and
earlier value bootstrapping.
Despite that, training with more environments usually leads to faster
training in wall-clock time if enough CPU cores are available.
Increasing the batch size (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) does not appear to hurt the sample complexity
in the range we tested (Fig. [57](#A8.F57 "Figure 57 ‣ H.2 Results ‣ Appendix H Experiment Training setup ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) which suggests that it should be increased for faster iteration speed.
On the other hand, the number of transitions gathered in each iteration (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
influences the performance quite significantly
(Fig. [52](#A8.F52 "Figure 52 ‣ H.2 Results ‣ Appendix H Experiment Training setup ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Finally, we compare different ways to handle minibatches (See Sec. [B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")
for the detailed description of different variants) in Fig. [53](#A8.F53 "Figure 53 ‣ H.2 Results ‣ Appendix H Experiment Training setup ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") and
[58](#A8.F58 "Figure 58 ‣ H.2 Results ‣ Appendix H Experiment Training setup ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study").
The plots suggest that stale advantages can in fact hurt performance
and that recomputing them at the beginning of each pass at least partially
mitigates the problem and performs best among all variants.
Recommendation. Go over experience multiple times.
Shuffle individual transitions before assigning them to minibatches
and recompute advantages once per data pass (See App. [B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") for the details).
For faster wall-clock time training use many parallel environments
and increase the batch size (both might hurt the sample complexity).
Tune the number of transitions in each iteration (C[B.1](#A2.SS1 "B.1 Data collection and optimization loop ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
if possible.
###
3.6 Timesteps handling (based on the results in Appendix [I](#A9 "Appendix I Experiment Time ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
Study description. We investigate choices related to the handling of timesteps: discount factor141414While the discount factor is sometimes treated as a part of the environment, we assume that the real goal
is to maximize *undiscounted* returns and the discount factor is a part of the algorithm which makes learning easier. (C[B.4](#A2.SS4 "B.4 Handling of timesteps ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")), frame skip (C[B.4](#A2.SS4 "B.4 Handling of timesteps ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")), and how episode termination due to timestep limits are handled (C[B.4](#A2.SS4 "B.4 Handling of timesteps ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
The latter relates to a technical difficulty explained in App. [B.4](#A2.SS4 "B.4 Handling of timesteps ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") where one assumes for the algorithm an infinite time horizon
but then trains using a finite time horizon [[16](#bib.bibx16)].
Interpretation.
Fig. [60](#A9.F60 "Figure 60 ‣ I.2 Results ‣ Appendix I Experiment Time ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") shows that the performance depends heavily on the discount factor γ𝛾\gammaitalic\_γ (C[B.4](#A2.SS4 "B.4 Handling of timesteps ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) with γ=0.99𝛾0.99\gamma=0.99italic\_γ = 0.99 performing reasonably well in all environments.
Skipping every other frame (C[B.4](#A2.SS4 "B.4 Handling of timesteps ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) improves the performance on 2222 out of 5555 environments (Fig. [61](#A9.F61 "Figure 61 ‣ I.2 Results ‣ Appendix I Experiment Time ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Proper handling of episodes abandoned due to the timestep limit seems not to affect the performance
(C[B.4](#A2.SS4 "B.4 Handling of timesteps ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), Fig. [62](#A9.F62 "Figure 62 ‣ I.2 Results ‣ Appendix I Experiment Time ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
which is probably caused by the fact that the timestep limit
is quite high (1000100010001000 transitions) in all the environments we considered.
Recommendation.
Discount factor γ𝛾\gammaitalic\_γ is one of the most important hyperparameters and should be tuned per environment (start with γ=0.99𝛾0.99\gamma=0.99italic\_γ = 0.99).
Try frame skip if possible.
There is no need to handle environments step limits in a special way for large step limits.
###
3.7 Optimizers (based on the results in Appendix [J](#A10 "Appendix J Experiment Optimizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
Study description.
We investigate two gradient-based optimizers commonly used in RL: (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) – Adam [[8](#bib.bibx8)] and RMSprop – as well as their hyperparameters (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and a linear learning rate decay schedule (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Interpretation.
The differences in performance between the optimizers (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) appear to be rather small with no optimizer consistently outperforming the other across environments (Fig. [66](#A10.F66 "Figure 66 ‣ J.2 Results ‣ Appendix J Experiment Optimizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Unsurprisingly, the learning rate influences the performance very strongly (Fig. [69](#A10.F69 "Figure 69 ‣ J.2 Results ‣ Appendix J Experiment Optimizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) with the default value of 0.00030.00030.00030.0003 for Adam (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) performing well on all tasks.
Fig. [67](#A10.F67 "Figure 67 ‣ J.2 Results ‣ Appendix J Experiment Optimizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") shows that Adam works better with momentum (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
For RMSprop, momentum (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) makes less difference (Fig. [71](#A10.F71 "Figure 71 ‣ J.2 Results ‣ Appendix J Experiment Optimizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) but our results suggest that it might slightly improve performance151515Importantly, switching from no momentum to momentum 0.9 increases the RMSprop step size by approximately 10×\times× and requires an appropriate adjustment to the learning rate (Fig. [74](#A10.F74 "Figure 74 ‣ J.2 Results ‣ Appendix J Experiment Optimizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))..
Whether the centered or uncentered version of RMSprop is used (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) makes no difference (Fig. [70](#A10.F70 "Figure 70 ‣ J.2 Results ‣ Appendix J Experiment Optimizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and similarly we did not find any difference between different values of the ϵitalic-ϵ\epsilonitalic\_ϵ coefficients (C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[B.5](#A2.SS5 "B.5 Optimizers ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), Fig. [68](#A10.F68 "Figure 68 ‣ J.2 Results ‣ Appendix J Experiment Optimizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") and [72](#A10.F72 "Figure 72 ‣ J.2 Results ‣ Appendix J Experiment Optimizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Linearly decaying the learning rate to 00 increases the performance on 4444 out of 5555 tasks but the gains are very small apart from Ant, where it leads to 15%percent1515\%15 % higher scores (Fig. [65](#A10.F65 "Figure 65 ‣ J.2 Results ‣ Appendix J Experiment Optimizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Recommendation. Use Adam [[8](#bib.bibx8)] optimizer with momentum β1=0.9subscript𝛽10.9\beta\_{1}=0.9italic\_β start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = 0.9
and a tuned learning rate (0.00030.00030.00030.0003 is a safe default).
Linearly decaying the learning rate may slightly improve performance but is
of secondary importance.
###
3.8 Regularization (based on the results in Appendix [K](#A11 "Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
Study description.
We investigate different policy regularizers (C[B.6](#A2.SS6 "B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")),
which can have either the form of a penalty (C[B.6](#A2.SS6 "B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), e.g. bonus for higher entropy)
or a soft constraint (C[B.6](#A2.SS6 "B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), e.g. entropy should not be lower than some threshold) which is enforced with a Lagrange multiplier.
In particular, we consider the following regularization terms: entropy (C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")), the Kullback–Leibler divergence (KL) between a reference 𝒩(0,1)𝒩01\mathcal{N}(0,1)caligraphic\_N ( 0 , 1 ) action distribution and the current policy (C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) and the KL divergence and reverse KL divergence between the current policy and the behavioral one
(C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")), as well as the “decoupled” KL divergence from [[18](#bib.bibx18), [34](#bib.bibx34)] (C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"), C[1](#A2.T1 "Table 1 ‣ B.6 Policy regularization ‣ Appendix B List of Investigated Choices ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
Interpretation.
We do not find evidence that any of the investigated regularizers helps significantly
on our environments with the exception of HalfCheetah on which all constraints (especially the entropy constraint) help (Fig. [76](#A11.F76 "Figure 76 ‣ K.2 Results ‣ Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") and [77](#A11.F77 "Figure 77 ‣ K.2 Results ‣ Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")).
However, the performance boost is largely independent on the constraint threshold
(Fig. [83](#A11.F83 "Figure 83 ‣ K.2 Results ‣ Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"),
[84](#A11.F84 "Figure 84 ‣ K.2 Results ‣ Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"),
[87](#A11.F87 "Figure 87 ‣ K.2 Results ‣ Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"),
[89](#A11.F89 "Figure 89 ‣ K.2 Results ‣ Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"),
[90](#A11.F90 "Figure 90 ‣ K.2 Results ‣ Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study") and
[91](#A11.F91 "Figure 91 ‣ K.2 Results ‣ Appendix K Experiment Regularizers ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study")) which suggests that the effect is caused by the initial high strength of the penalty (before it gets adjusted) and not by the desired constraint.
While it is a bit surprising that regularization does not help at all (apart from HalfCheetah),
we conjecture that regularization might be less important in our experiments
because: (1) the PPO policy loss already enforces the trust region which makes KL penalties or constraints redundant; and
(2) the careful policy initialization (See Sec. [3.2](#S3.SS2 "3.2 Networks architecture (based on the results in Appendix E) ‣ 3 Experiments ‣ What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study"))
is enough to guarantee good exploration and makes the entropy bonus or constraint redundant.
4 Related Work
---------------
Islam et al. [[15](#bib.bibx15)] and
Henderson et al. [[22](#bib.bibx22)]
point out the reproducibility issues in RL
including the performance differences between different code bases,
the importance of hyperparameter tuning and the high level
of stochasticity due to random seeds. Tucker et al. [[26](#bib.bibx26)] showed that the gains, which had been
attributed to one of the recently
proposed policy gradients improvements,
were, in fact, caused by the implementation details.
The most closely related work to ours is probably Engstrom et al. [[27](#bib.bibx27)]
where the authors investigate code-level improvements in the
PPO [[17](#bib.bibx17)] code base and conclude that they are responsible
for the most of the performance difference between PPO and TRPO
[[10](#bib.bibx10)].
Our work is also similar to other large-scale studies
done in other fields of Deep Learning, e.g.
model-based RL [[31](#bib.bibx31)],
GANs [[24](#bib.bibx24)], NLP [[35](#bib.bibx35)], disentangled representations [[23](#bib.bibx23)]
and convolution network architectures [[36](#bib.bibx36)].
5 Conclusions
--------------
In this paper, we investigated the importance of a broad set of high- and low-level choices that need to be made when designing and implementing on-policy learning algorithms.
Based on more than 250’000 experiments in five continuous control environments, we evaluate the impact of different choices and provide practical recommendations.
One of the surprising insights is that the initial action distribution plays an important role in agent performance.
We expect this to be a fruitful avenue for future research.
|
3f3458fb-10b6-4df3-b157-23b580175271
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Implications of the inference scaling paradigm for AI safety
Scaling inference
With the release of OpenAI's o1 and o3 models, it seems likely that we are now contending with a new scaling paradigm: spending more compute on model inference at run-time reliably improves model performance. As shown below, o1's AIME accuracy increases at a constant rate with the logarithm of test-time compute (OpenAI, 2024).
OpenAI's o3 model continues this trend with record-breaking performance, scoring:
* 2727 on Codeforces, which makes it the 175th best competitive programmer on Earth;
* 25% on FrontierMath, where "each problem demands hours of work from expert mathematicians";
* 88% on GPQA, where 70% represents PhD-level science knowledge;
* 88% on ARC-AGI, where the average Mechanical Turk human worker scores 75% on hard visual reasoning problems.
According to OpenAI, the bulk of model performance improvement in the o-series of models comes from increasing the length of chain-of-thought (and possibly further techniques like "tree-of-thought") and improving the chain-of-thought (CoT) process with reinforcement learning. Running o3 at maximum performance is currently very expensive, with single ARC-AGI tasks costing ~$3k, but inference costs are falling by ~10x/year!
A recent analysis by Epoch AI indicated that frontier labs will probably spend similar resources on model training and inference.[1] Therefore, unless we are approaching hard limits on inference scaling, I would bet that frontier labs will continue to pour resources into optimizing model inference and costs will continue to fall. In general, I expect that the inference scaling paradigm is probably here to stay and will be a crucial consideration for AGI safety.
AI safety implications
So what are the implications of an inference scaling paradigm for AI safety? In brief I think:
* AGI timelines are largely unchanged, but might be a year closer.
* There will probably be less of a deployment overhang for frontier models, as they will cost ~1000x more to deploy than expec
|
5702c21d-2192-4b9b-893d-e17c2320f0fd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Bayesian Argument for Theistic Fine-Tuning
Earlier, I linked to a Bayesian argument for the resurrection of Jesus - not because I think it succeeds, obviously, but because I thought Less Wrongers might be interested to know that at least since Swinburne, some Christian apologists have taken to defending their religious dogma with the language of Bayesian confirmation theory.
Another example of this is Robin Collins' version of the argument from fine-tuning for the existence of God. One of the major published objections to this kind of fine-tuning argument comes from the authors of that Bayesian argument for the Resurrection, Tim and Lydia McGrew. Another objection comes from Ikeda & Jefferys. Palonen also offers some observations.
I offer this merely for your curiosity.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.