Title: User Comment Replies — LessWrong
Description: A community blog devoted to refining the art of rationality

All of Ben Pace's Comments + Replies

LessWrong has been acquired by EA
Ben Pace · 5h
Oops! Then we have taken that feature down for a bit until further testing is done (and the devs have had a little more sleep).

LessWrong has been acquired by EA
Ben Pace · 6h
While we always strive to deliver the premium unfinished experience you expect from EA, it seems this bug slipped past our extensive testing. We apologize; a day-one patch is already in development. (I expect you will see your picoLightcones in the next 30-60 mins.)
Edit: And you should have now gotten them, and any future purchases should go through ~immediately.

williawa · 6h
I have not gotten them.

williawa's Shortform
Ben Pace · 6h
[Comment moved here for visibility by the community.]

Benito's Shortform Feed
Ben Pace · 6d
An idea I've been thinking about for LessOnline this year is a blogging awards ceremony. The idea is that there's a voting procedure on the blogposts of the year, in a bunch of different categories; a shortlist is made and winners are awarded a prize. I like opportunities for celebrating things in the online, written, truth-seeking ecosystem. I'm interested in reacts on whether people would be pro something like this happening, and comments with suggestions for how to do it well. (Epistemic status: tentatively excited about this idea.) Here's my firs... (read more)

On (Not) Feeling the AGI
Ben Pace · 7d
Thanks! Zvi's post is imported, so it's stored a little differently than normal posts.
Here are two copies I made that are stored differently (1, 2); I'd appreciate you letting me know whether either of these looks correct on mobile. (Currently it looks fine on my iPhone; are you on an Android?)

aphyer · 7d
...to my confusion, not only do both of those look fine to me on mobile, the original post now also looks fine. (Yes, I am on Android.)

On (Not) Feeling the AGI
Ben Pace · 7d
Same, here's a screenshot. Perhaps Molony is using a third-party web viewer?

Declan Molony · 6d
No, I wasn't using a third-party viewer. I was viewing it on PC. It looks normal today and I'm seeing paragraph breaks now.

aphyer · 7d
On mobile I see no paragraph breaks; on PC I see them. Edited to add what it looks like on mobile:

Will Jesus Christ return in an election year?
Ben Pace · 7d
Seeing this, I update toward a heuristic of "all Polymarket variation within 4 percentage points is noise".

Garrett Baker · 7d
I think the math works out such that the variation is much more extreme when you get to much more extreme probabilities. Going from 4% to 8% is 2x profits, but going from 50% to 58% is only 1.16x profits.

Elizabeth's Shortform
Ben Pace · 9d
I tried to invite Iceman to LessOnline, but I suspect he no longer checks the old email associated with that account. If anyone knows up-to-date contact info, I'd appreciate you intro-ing us or just letting him know we'd love to have him join.

iceman · 9d
I'll pass, but thanks.

METR: Measuring AI Ability to Complete Long Tasks
Ben Pace · 12d
I think my front-end productivity might be up 3x?
A shoggoth helped me build a Stripe shop and do a ton of UI design that I would've been hesitant to take on myself (without hiring someone else to work with), and it has also increased the speed at which I churn through front-end designs. (This is going from "wouldn't take on the project due to low skill" to "can take it on and deliver it in a reasonable amount of time", which is different from "takes a top programmer and speeds them up 3x".)

LessOnline 2025: Early Bird Tickets On Sale
Ben Pace · 15d
I have a bit of work to do on the scheduling app before sending it around to everyone this year. I'm not certain when I will get to that; my guess is about 4 weeks from now. Relatedly: we have finished renovating the final building on our campus, so there will be more rooms for sessions this year than last year.

Elizabeth's Shortform
Ben Pace · 16d
Am I being an idiot, or does "99%<" technically work? Like, it implies that 99% is less than it, mirroring how "<1%" means 1% is greater than it.

Elizabeth's Shortform
Ben Pace · 16d
I personally have it as a to-do to just build polls. (React if you want to express that you would likely use this.)

Benito's Shortform Feed
Ben Pace · 23d
Edited, should be working fine now, thx!

Benito's Shortform Feed
Ben Pace · 23d
Something a little different: today I turn 28. If you might be open to doing something nice for me for my birthday, I would like to request the gift of data. I have made a 2-4 min anonymous survey about me as a person, and if you have a distinct sense of me as a person (even just from reading my LW posts/comments) I would greatly appreciate you filling it out and letting me know how you see me! Here's the survey. It's an anonymous survey where you rate me on lots of attributes like "anxious", "honorable", "wise" and more. All multiple-choice. Two years ago I al...
(read more)

Guive · 23d
When I click the link I see this:

A Bear Case: My Predictions Regarding AI Progress
Ben Pace · 25d
I'm never sure if it makes sense to add that clause every time I talk about the future.

A Bear Case: My Predictions Regarding AI Progress
Ben Pace · 25d
Curated. Some more detailed predictions of the future, different from others', and one of the best bear cases I've read. This feels a bit less timeless than many posts we curate, but my guess is that (a) it'll be quite interesting to re-read this in 2 years, and (b) it makes sense to record good and detailed predictions like this more regularly in the field of AI, which is moving so much faster than most of the rest of the world.

Thane Ruthenis · 25d
It'll be quite interesting to be alive to re-read this in 2 years, yes.

The Semi-Rational Militar Firefighter
Ben Pace · 1mo
Thanks for this short story! I have so many questions. This was during training camp? How many days/weeks/months in was this? How many people went through this training camp with you? Are you still friends with any of them? How long was training, and then how long did you serve as a professional? I'd welcome any links you have to content about these folks in future stories. I had to check the Wikipedia page before I fully believed that the logo was a skull with a knife through it. Yes, I would be interested in reading another story about your time there. This stor... (read more)

P. João · 1mo
Ben, my brother in chaos, you're about to unlock Level 2 of my military lore.
Training duration: Imagine a Brazilian telenovela directed by Bear Grylls—1 year total, culminating in a final jungle camp (but with more mosquito-borne diseases).
Cast of characters: Around 120 firefighters, split into four platoons.
My origin story: I trained for nine years just to get in—which is ironic, considering I was an asthmatic allergic to:
* Pollen
* Military hierarchy
* Taking life seriously
But first, some links for you:
That time I almost got arrested for teaching CPR.
Because of that, I ended up working on projects like this: Could My Work Beyond 'Haha' Benefit the LessWrong Community?
It wasn't easy. I even failed the Brazilian Marines' physical for... acne (true story!). Apparently, making enemy forces too scared wasn't part of the strategic doctrine.
Want to understand BOPE? Watch Elite Squad 1 & 2. Their training makes ours look like My Little Pony: Rescue Brigade. Fair warning—their version of "team building" involves fewer beach barbecues and more interrogating drug lords.

Open Thread Spring 2025
Ben Pace · 1mo
Intercom please! It helps for us to have back-and-forth like "What device / operating system / browser?" and other relevant questions.

help, my self image as rational is affecting my ability to empathize with others
Ben Pace · 1mo
That sounds good to me, i.e. draft this post, and then make it a comment in one of those places instead (my weak guess is that a quick take is better, but whatever you like).

help, my self image as rational is affecting my ability to empathize with others
Ben Pace · 1mo
Posted either as a comment on the seasonal open thread, or using the quick takes / shortform feature, which posts it in your shortform (e.g. here is my shortform). I'm saying that this seems to me not on the level of substance of a post, so it'd be better as a comment of one of the above two types; it's also plausible to me that you'd get more engagement as a comment in the open thread.

KvmanThinking · 1mo
ahh that makes sense.
should i just move it there now?

help, my self image as rational is affecting my ability to empathize with others
Ben Pace · 1mo
FWIW this feels like it should be a shortform or open-thread comment rather than a post.

KvmanThinking · 1mo
A "shortform/open thread"?

Will_Pearson's Shortform
Ben Pace · 1mo
I have used my admin powers to put it into a collapsible section, so that people who expand this in recent discussion do not have to scroll for 5 seconds to get past it.

Benito's Shortform Feed
Ben Pace · 1mo
Though if the text changes, then it degrades gracefully to just linking to the right webpage, which is the current norm.

Benito's Shortform Feed
Ben Pace · 1mo
I have a general belief that internet epistemic hygiene norms should include that, when you quote someone, not only should you link to the source, you should link to the highlight of that source. In general, if you highlight text on a webpage and right-click, you can "Copy link to highlight", which when opened scrolls to and highlights that text. (Random example on Wikipedia.) Further on this theme, archive.is has the interesting feature of constantly altering the URL to point to the currently highlighted bit of text, making this even easier. (Example, a... (read more)

Czynski · 1mo
I find them visually awful and disable them in settings. And I avoid using archive.is because there's no way to turn that off. Not that I browse LW that much, in fairness.

gwern · 1mo
I have misgivings about the text-fragment feature as currently implemented. It is at least now a standard, and Firefox implements reading text-fragment URLs (it just doesn't conveniently allow creation without a plugin or something), which was my biggest objection before; but there are still limitations to it which show that a lot of what the text-fragment 'solution' is, is a solution to the self-inflicted problems of many websites being too lazy to provide useful anchor IDs anywhere in the page. (I don't know how often I go to link a section of a blog post, w...
(read more)

Mateusz Bagiński · 1mo
You can add it as an opt-in feature.

cubefox · 1mo
The highlights are officially called "text fragments" and the syntax is described here: https://developer.mozilla.org/en-US/docs/Web/URI/Reference/Fragment/Text_fragments

Guive · 1mo
I like this idea. There's always endless controversy about quoting out of context. I can't recall seeing any previous specific proposals to help people assess the relevance of context for themselves.

MondSemmel · 1mo
"Copy link to highlight" is not available in Firefox. And while e.g. Bing search seems to automatically generate these "#:~:text=" links, I find they don't work with any degree of consistency. And they're even more affected by link rot than usual, since any change to the initial text (like a typo fix) will break that part of the link.

The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better
Ben Pace · 1mo
The point that "small protests are the only way to get big protests" may be directionally accurate, but I want to note that there have been large protests that happened without that. Here's a shoggoth listing a bunch, including the 1989 Tiananmen Square Protests, the 2019 Hong Kong Anti-Extradition Protests, the 2020 George Floyd Protests, and more. The shoggoth says spontaneous large protests tend to be in response to triggering events and do rely on pre-existing movements that are ready to mobilize, the latter of which your work is helping build.

Matt Vincent · 1mo
Unless you have a very optimistic view of warning shots, we shouldn't rely on such an opportunity.

Benito's Shortform Feed
Ben Pace · 1mo
I want to contrast two perspectives on human epistemology I've been thinking about for over a year. There's one school of thought about how to reason about the future, which is about naming a bunch of variables, putting probability distributions over them, multiplying them together, and doing Bayesian updates when you get new evidence.
This lets you assign probabilities, including to conjunctions of lots of outcomes. "What probability do I assign that the S&P goes down, and the Ukraine/Russia war continues, and I find a new romantic partner?" I'll call this the "sp... (read more)

Thane Ruthenis · 1mo
Hm, I'm not sure I understand what's confusing about this.
First, suppose you're an approximate utility maximizer. There's a difference between optimizing the expected utility E[U(world, action)] and optimizing utility in the expected world U(E[world], action). In general, in the former case, you're not necessarily keeping the most-likely worlds in mind; you're optimizing the worlds in which you can get the most payoff. Those may be specific terrible outcomes you want to avert, or specific high-leverage worlds in which you can win big (e.g., worlds where your startup succeeds). Choosing which worlds to keep in mind/optimize obviously impacts in which worlds you succeed. (Startup founders who start being risk-averse instead of aiming to win in the worlds in which they can win big lose, because they're no longer "looking" at the worlds where they succeed, and aren't shaping their actions to exploit their features.)
Second, human world-models are hierarchical, and your probability distribution over worlds is likely multimodal. So when you pick a set of worlds you care about, you likely pick several modes of this distribution (rather than specific fully specified worlds), characterized by various high-level properties (such as "AI progress continues apace" vs. "DL runs into a wall"). When thinking about one of the high-level-constrained worlds/the neighbourhood of a mode, you further zoom in on modes corresponding to lower-level properties, and so on. Which is why you're not keeping a bunch of basically-the-same expected trajectories in your head, but meaningfully "different" trajectories.
This... all seems to be business-as-usual to me?
I may be misunderstanding what you're getting at.

samuelshadrach · 1mo (edited)
This is probably obvious to you, but you can expand the working memory bottleneck by making lots of notes. You still need to store the "index" of the notes in your working memory, though, to be able to get back to relevant ideas later. Making a good index includes compressing the ideas till you get the "core" insights into it. Some part of what we consider intelligence is basically search, and some part of what we consider faster search is basically compression. Tbh you can also do multi-level indexing: the top-level index (a crisp world model of everything) could be in working memory, and it can point to indexes (crisp world models of specific topics) actually written in your notes, which further point to more extensive notes on those topics.
As an aside, automated R&D using LLMs currently heavily relies on embedding search and RAG. An AI's context window is loosely analogous to a human's working memory in that way. The AI knows millions of ideas, but it can't simulate pairwise interactions between all ideas, as that would require too much GPU time. So it too needs to select some pairs or tuples of ideas (using embedding search or something similar) within which it can explore interactions. The embedding dataset is a compressed version of the source dataset, and the LLM itself is an even more compressed version of the source dataset. So there is interplay between data at different levels of compression.

tailcalled · 1mo
I think the billion-dollar question is: what is the relationship between these two perspectives? For example, a simplistic approach would be to see cognitive visualization as some sort of Monte Carlo version of spreadsheet epistemology. I think that's wrong, but the correct alternative is less clear.
Maybe something involving LDSL, but LDSL seems far from the whole story.

Viliam · 1mo
So, one problem seems to be that humans are slow, and evaluating all options would require too much time, so you need to prune the option tree a lot. I am not sure what the optimal strategy is here; it seems like all the lottery winners have focused on analyzing the happy path, but we don't know how much luck was involved in actually staying on the happy path, and what the average outcome was when they deviated from it.
Another problem is that human prediction and motivation are linked in a bad way, where having a better model of the world sometimes makes you less motivated, so sometimes lying to yourself can be instrumentally useful... the problem is, you cannot figure out exactly how instrumentally useful, because you are lying to yourself, duh.
Another important piece of data would be: how many of the people who cannot imagine failure actually do succeed, and what typically happens to them when they don't. Maybe nothing serious. Maybe they often ruin their lives.

How might we safely pass the buck to AI?
Ben Pace · 1mo
Further detail on this: Cotra has more recently updated at least 5x against her original 2020 model in the direction of faster timelines.
Greenblatt writes:
Here are my predictions for this outcome:
25th percentile: 2 years (Jan 2027)
50th percentile: 5 years (Jan 2030)
Cotra replies:
My timelines are now roughly similar on the object level (maybe a year slower for 25th and 1-2 years slower for 50th)
This means 25th percentile for 2028 and 50th percentile for 2031-2. The original 2020 model assigns 5.23% by 2028 and 9.13% | 10.64% by 2031 | 2032 respectively. Each t... (read more)

ryan_greenblatt · 1mo
Note that the capability milestone forecasted in the linked shortform is substantially weaker than the notion of transformative AI in the 2020 model. (It was defined as AI with an effect at least as large as the industrial revolution.)
I don't expect this adds many years; for me it adds ~2 years to my median. (Note that my median for time from 10x to this milestone is lower than 2 years, but the median to Y isn't equal to the median to X plus the median from X to Y.)

Martin Randall's Shortform
Ben Pace · 1mo
High expectation of x-risk and having lots to work on is why I have not signed up for cryonics personally. I don't think it's a bad idea, but it has never risen up my personal stack of things worth spending tens of hours on.

[RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm.
Ben Pace · 1mo
I agree that the update was correct. But you didn't state a concrete action to take?

When you downvote, explain why
Ben Pace · 2mo
I disagree, but FWIW, I do think it's good to help existing, good contributors understand why they got the karma they did. I think your comment here is an example of that, which I think is prosocial.

Seth Herd · 2mo
I'm curious why you disagree? I'd guess you're thinking that it's necessary to keep low-quality contributions from flooding the space, and that telling people how to improve when they're just way off the mark is not helpful. Or that if they haven't read the FAQ or read enough posts, that shouldn't be rewarded. But I'm very curious why you disagree.

The Failed Strategy of Artificial Intelligence Doomers
Ben Pace · 2mo
FWIW in my mind I was comparing this to things like Glen Weyl's Why I Am Not a Technocrat, and thought this was much better. (Related: Scott Alexander's response, Weyl's counter-response.)

The Failed Strategy of Artificial Intelligence Doomers
Ben Pace · 2mo
I wrote that this "is the best sociological account of the AI x-risk reduction efforts of the last ~decade that I've seen."
The line has some disagree reacts inline; I expect this is primarily an expression that the disagree-ers assess the article as low quality, but I would be curious to see links to any other articles or posts that attempt something similar to this one, in order to compare whether they do better/worse/different. I actually can't easily think of any (which is why I felt it was not that bold to say this was the best).
Edit: I've expanded the opening paragraph, so as not to confuse my comment for me agreeing with the object-level assessment of the article.

The Failed Strategy of Artificial Intelligence Doomers
Ben Pace · 2mo
I'm not particularly resolute on this question. But I get this sense when I look at (a) the best agent foundations work that's happened over ~10 years of work on the matter, and (b) the work output of scaling up the number of people working on 'alignment' by ~100x. For the first, trying to get a better understanding of the basic concepts like logical induction and corrigibility and low-impact and ontological updates, while I feel like there's been progress (in timeless decision theory taking a clear step forward in figuring out how to think about decision-makers ... (read more)

Davidmanheim · 2mo
I hate to be insulting to a group of people I like and respect, but "the best agent foundations work that's happened over ~10 years of work" was done by a very small group of people who, despite being very smart, certainly smarter than myself, aren't academic superstars or geniuses (Edit to add: on a level that is arguably sufficient, as I laid out in my response below). And you agree about this.
The fact that they managed to make significant progress is fantastic, but substantial progress on deep technical problems is typically due to (ETA: only-few-in-a-generation level) geniuses, large groups of researchers tackling the problem, or usually both. And yes, most work on the topic won't actually address the key problem, just like most work in academia does little or nothing to advance the field. But progress happens anyway, because, intentionally or accidentally, progress on problems is often cumulative, and as long as a few people understand the problem that matters, someone usually actually notices when a serious advance occurs. I am not saying that more people working on the problem and more attention would definitely crack the problems in the field this decade, but I certainly am saying that humanity as a whole hasn't managed even what I'd consider a half-assed semi-serious attempt.

Lucius Bushnaq · 2mo
I don't share the feeling that not enough of relevance has happened over the last ten years for us to seem on track for solving it in a hundred years, if the world's technology[1] were magically frozen in time. Some more insights from the past ten years that look to me like they're plausibly nascent steps in building up a science of intelligence and maybe, later, alignment: We understood some of the basics of general pattern matching: how it is possible for embedded minds that can't be running actual Solomonoff induction to still have some ability to ext... (read more)

The Failed Strategy of Artificial Intelligence Doomers
Ben Pace · 2mo
Can I double-click on what "does not understand politics at [a] very deep level" means? Can someone explain what they have in mind? I think Eliezer probably has better models than most of what our political institutions are capable of, and probably isn't very skilled at personally politicking. I'm not sure what other people have in mind.

samuelshadrach · 1mo
Sorry for the delay in replying.
I'm not sure the two are separable. Let's say you believe in the "great man" theory of history (i.e. a few people disproportionately shape history, rather than institutions, market forces, etc.). Then your ability to predict what other great men could do automatically means you may have some of the powers of a great man yourself.
Also, yes, I mean he isn't exceptionally skilled at either of the two. My bet is there are people who could make significantly better predictions than him, if only they also understood the technical details of AI.

The Failed Strategy of Artificial Intelligence Doomers
Ben Pace · 2mo
The former, but the latter is a valid response too. Someone doing a good job of painting an overall picture is a good opportunity to reflect on the overall picture and what changes to make, or what counter-arguments to present to this account.

The Failed Strategy of Artificial Intelligence Doomers
Ben Pace · 2mo
For what it's worth, I have grown pessimistic about our ability to solve the open technical problems even given 100 years of work on them. I think it possible but not probable in most plausible scenarios. Correspondingly, the importance I assign to increasing the intelligence of humans has drastically increased.

Raphael Roche · 2mo
Don't you think that articles like "Alignment Faking in Large Language Models" by Anthropic show that models can internalize the values present in their training data very deeply, to the point of deploying various strategies to defend them, in a way that is truly similar to that of a highly moral human? After all, many humans would be capable of working for a pro-animal-welfare company and then switching to the opposite without questioning it too much, as long as they are paid. Granted, this does not solve the problem of an AI trained on data embedding undesirable values, which we could then lose control over.
But at the very least, isn't it a staggering breakthrough to have found a way to instill values into a machine so deeply, and in a way similar to how humans acquire them? Not long ago, this might have seemed like pure science fiction and utterly impossible. There are still many challenges regarding AI safety, but isn't it somewhat extreme to be more pessimistic about the issue today than in the past? I read Superintelligence by Bostrom when it was released, and I must say I was more pessimistic after reading it than I am today, even though I remain concerned. But I am not an expert in the field—perhaps my perspective is naïve.

Seth Herd · 2mo
I feel a bit sad that the alignment community is so focused on intelligence enhancement. The chance of getting enough time for that seems so low that it amounts to accepting a low chance of survival. What has convinced you that the technical problems are unsolvable? I've been trying to track the arguments on both sides rather closely, and the discussion just seems unfinished. My shortform on cruxes of disagreement on alignment difficulty is still mostly my current summary of the state of disagreements. It seems like we have very little idea how technically difficult alignment will be. The simplicia/doomimir debates sum up the logic very nicely, but the distribution of expert opinions seems more telling: people who think about alignment don't know to what extent techniques for aligning LLMs will generalize to transformative AI, AGI, or ASI. There's a lot of pessimism about the people and organizations that will likely be in charge of building and aligning our first AGIs. I share this pessimism. But it seems quite plausible to me that those people and orgs will take the whole thing slightly more seriously by the time we get there, and actual technical alignment will turn out to be easy enough that even highly flawed humans and orgs can accomplish it.
That seems like a much better out to play for, or at least investigate, than unstated plans, or the good fortune of roadblocks pausing AI progress long enough for intelligence enhancement to get a chance.

tailcalled · 2mo
"Correspondingly the importance I assign to increasing the intelligence of humans has drastically increased."
I feel like human intelligence enhancement would increase capabilities development faster than alignment development, unless perhaps you've got a lot of discrimination in favor of only increasing the intelligence of those involved with alignment.

aysja · 2mo
"I have grown pessimistic about our ability to solve the open technical problems even given 100 years of work on them."
Why?

Benito's Shortform Feed
Ben Pace · 2mo
My feelings here aren't at all related to any news or current events. I could've written this any time in the last year or two.

Benito's Shortform Feed
Ben Pace · 2mo
Can you give me your best one-or-two-line guess? I think the question is trivial from what I've written, and I don't really know why you're not also finding it clear.

Benito's Shortform Feed
Ben Pace · 2mo
For over a decade I have examined the evidence, thought about the situation from many different perspectives (political, mathematical, personal, economic, etc.), and considered arguments and counterarguments. This is my honest understanding of the situation, and I am expressing how I truly feel about that.

A Three-Layer Model of LLM Psychology
Ben Pace · 2mo
Curated. Thanks for writing this! I don't believe the ideas in this post are entirely original (e.g. character/ground is similar to the distinction between simulator/simulacra), but I'm going to keep repeating that it's pro-social to present a good idea in lots of different ways, and indeed reading this post has helped it fit together better in my mind.

Jan_Kulveit · 2mo
Obviously there is similarity, but if you round character/ground to simulator/simulacra, it's a mistake.
Not because I want to claim originality, but because I want people to get the model right. The models are overlapping but substantially different, as we explain in this comment, and they sometimes have very different implications; i.e. it is not just the same good idea presented in a different way. If the long-term impact of the simulators post would be for LW readers to round every similar model in this space to simulator/... (read more)

The Case Against AI Control Research
Ben Pace · 2mo
Curated! I think this is a fantastic contribution to the public discourse about AI control research. This really helped me think concretely about the development of AI and the likely causes of failure. I also got a lot out of the visualization at the end of the "Failure to Generalize" section, in terms of trying to understand why an AI's cognition will be alien and hard to interpret. In my view there are already quite a lot of high-level alien forces running on humans (e.g. Moloch), and there will be high-level alien forces running on the simulated s... (read more)

Learning By Writing
Ben Pace · 2mo
However, it is on his LinkedIn.

Benito's Shortform Feed
Ben Pace · 2mo
Yes; she has come to visit me for two months, and I have helped her get into a daily writing routine while she's here. I know she has the ability to finish at least one.

Benito's Shortform Feed
Ben Pace · 2mo
Thank you. It does not currently look to me like we will win this war, speaking figuratively. But regardless, I still have many opportunities to bring truth, courage, justice, honor, love, playfulness, and other virtues into the world, and I am a person whose motivations run more on living out virtues than on moving toward concrete hopes. I will still be here building things I love, like LessWrong and Lighthaven, until the end.

Nathan Helm-Burger · 2mo
Though I have worries, and short timelines, so too do I have hope.
I believe the next two years will be pivotal, and that we have important roles to play. Let us hold firm in the face of great danger.

Benito's Shortform Feed
Ben Pace · 2mo
So many people have lived such grand lives. I have certainly lived a greater life than I expected, filled with adventures and curious people. But people will soon not live any lives at all. I believe that we will soon build intelligences more powerful than us who will disempower and kill us all. I will see no children of mine grow to adulthood. No people will walk through mountains and trees. No conscious mind will discover any new laws of physics. My mother will not write all of the novels she wants to write. The greatest films that will be made have prob... (read more)

Quinn · 2mo
yeah, last week was grim for a lot of people, with r1's implications for proliferation and the Stargate fanfare after the inauguration. I had a palpable sensation of it pivoting from midgame to endgame, but I would doubt that sensation is reliable or calibrated.

eigen · 2mo
I disagree extremely. This is the best moment of my life. I am at the best point of my career (the o1 and o3 research previews are allowing me to reach the best solutions, which I couldn't imagine reaching on my own, much less in such a short time), and they have helped me create two companies completely different from my career (I have optimized hydroponic setups and the cultivation of mushrooms purely with o1-pro, to incredible levels). My father, a doctor, tells me his patients are doing better than ever only because of his use of o1; doctors are using it in their meetings with their most difficult-to-diagnose patients. My girlfriend uses it for mental health. I could continue. I feel so empowered. I have read too much of alignment to think that we are going to make it. It's up to you -really- to choose to feel empowered or down about it.
I'm honestly having the best time of my life.

tailcalled, 2mo:
Is your mother currently spending a lot of her time writing novels?

stavros, 2mo:
What is true is already so / It all adds up to normality. What you've lost isn't the future, it's the fantasy. What remains is a game that we were born losing, where there may be few moves left to make, and where most of us, most of the time, don't even have a seat at the table. However, it is a game with very high variance. It is a game where world-shaping things happen regularly due to one person getting lucky (right person, right place, right time, right idea, etc.). And one thing I've noticed in people who routinely excel at high-variance games, e.g. Poker, MTG, is how unaffected they are when they're down/behind. There is a mindset, in the moment, not of playing to win... but of playing optimally: of making the best move they can in any situation, of playing to maximize their outs no matter how unlikely they may be. To those for whom the OP's message strongly resonates: let it. Feel it. Give your grief and fear, sorrow and anger their due. Practice self-care; be kind and compassionate to yourself as you would to another who felt what you are feeling. One morning you will wake up feeling okay, and you'll realize you've felt okay more often than not lately. Then, should this game still appeal to you, it is time to start playing again :)

testingthewaters, 2mo:
Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.

Good men, the last wave by, crying how bright
Their frail deeds might have danced in a green bay,
Rage, rage against the dying of the light.

Wild men who caught and sang the sun in flight,
And learn, too late, they grieved it on its way,
Do not go gentle into that good night.

Grave men, near death, who see w
… (read more)

Announcing Dialogues
Ben Pace, 2mo:
I am sad they're not getting as much use. I have wondered whether they would work well as part of the comment-section UI, where if you're having a back-and-forth with someone, the site offers you "Would you like to have a dialogue instead?" with a single button.

Alignment Faking in Large Language Models
Ben Pace, 2mo:
Curated! Based on the conceptual arguments for existential risk from AI, this kind of behavior was expected at some point. For those not convinced by the conceptual arguments (or who haven't engaged much with them), this result moves the conversation forward now that we have concretely seen this alignment-faking behavior happening. Furthermore, it seems to me that the work was done carefully, and I can see a bunch of effort went into explaining it to a broad audience and getting some peer review, which is pro-social. I think it's interesting to see that with c… (read more)

Passages I Highlighted in The Letters of J.R.R.Tolkien
Ben Pace, 3mo:
I haven't read all of the quotes, but here are a few thoughts I jotted down while reading through. Tolkien talks here of how one falls from being a neutral or good character in the story of the world into being a bad or evil character, which I think is worthwhile to ruminate on. He seems to be opposed to machines in general, which is too strong, but it helps me understand the Goddess of Cancer (although Scott thinks much more highly of the Goddess of Cancer than Tolkien did, and explicitly calls out Tolkien's interpretation at the top of that post). The sectio… (read more)

Raphael Roche, 3mo:
"I think the Fall is not true historically." While all men must die and all civilizations must collapse, the end of all things is merely the counterpart of the beginning of all things. Creation, the birth of men, and the rise of civilizations are also great patterns and memorable events, both in myths and in history.
However, the feeling does not respect this symmetry; perhaps due to loss aversion and the peak-end rule, the Fall, and tragedy in general, carries a uniquely strong poetic resonance. Fatum represents the story's inevitable conclusion. There is something epic in the Fall, something existential, even more than in the beginning of things. I believe there is something deeply rooted, hardwired, in most of us that makes this so. Perhaps it is tied to our consciousness of finitude and our fear of the future, of death. Even if it represents a traditional and biased interpretation of history, I cannot help but feel moved. Tolkien has an unmatched ability to evoke and magnify this feeling, especially in The Silmarillion and other unfinished works; I think naturally of The Fall of Valinor and The Fall of Gondolin, among other things.

Passages I Highlighted in The Letters of J.R.R.Tolkien
Ben Pace, 3mo:
I have curated this (i.e. sent it out on our mailing list to ~30k subscribers). Thank you very much for putting these quotes together. While his perspective on the world has some flaws, I have still found wisdom in Tolkien's writings, which helped me find strength at one of the weakest points of my life. I also liked Owen CB's post on AI, centralization, and the One Ring, which is a perspective on our situation I've found quite fruitful.

Ben Pace, 3mo:
I haven't read all of the quotes, but here are a few thoughts I jotted down while reading through.
* Tolkien talks here of how one falls from being a neutral or good character in the story of the world into being a bad or evil character, which I think is worthwhile to ruminate on.
* He seems to be opposed to machines in general, which is too strong, but it helps me understand the Goddess of Cancer (although Scott thinks much more highly of the Goddess of Cancer than Tolkien did, and explicitly calls out Tolkien's interpretation at the top of that post).
* The section on language is interesting to me; I often spend a lot of time trying to speak in ways that feel true and meaningful to me, and avoiding using others' language that feels crude and warped. This leads me to make peculiar choices of phrasings and responses. I think the culture here on LessWrong has a unique form of communication and use of language, and I think it is a good way of being in touch with reality. I think this is one of the reasons that something like this is worthwhile.
* I think the Fall is not true historically, but I often struggle to ponder us as a world in the bad timeline, cut off from the world we were supposed to be in. This helps me visualize it: always desiring to be in a better world and struggling towards it in failure. "Exiled" from the good world, longing for it.

(The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser
Ben Pace, 3mo:
When the donation came in 15 mins ago, I wrote in Slack: "(I think he should get a t-shirt)". So you came close to being thwarted! But fear not: after reading this I will simply not send you a t-shirt :)

Jay Bailey, 2mo:
Having reflected on this decision more, I have decided I no longer endorse those feelings in point B of my second-to-last paragraph. In fact, I've decided that "I donated roughly 1k to a website that provided way more expected value than that to me over my lifetime, and also if it shut down I think that would be a major blow to one of the most important causes in the world" is something to be proud of, not embarrassed by, and something worthy of being occasionally reminded of. So if you're still sending them out, I'd gladly take one after all :)