Rafael Harth - LessWrong

I'm an independent researcher currently working on a sequence of posts about consciousness. You can send me anonymous feedback here: https://www.admonymous.co/rafaelharth. If it's about a post, you can add [q] or [nq] at the end to indicate whether or not you want me to quote it in the comment section.

Sequences

- Consciousness Discourse
- Literature Summaries
- Factored Cognition
- Understanding Machine Learning

Posts (sorted by new)

- Rafael Harth's Shortform (5y)
- ≤10-year Timelines Remain Unlikely Despite DeepSeek and o3 (2mo)
- Book Review: Consciousness Explained (as the Great Catalyst) (2y)
- Why it's so hard to talk about Consciousness (2y)
- A chess game against GPT-4 (2y)
- Understanding Gödel's Incompleteness Theorem (3y)
- The case for Doing Something Else (if Alignment is doomed) (3y)
- Not-Useless Advice For Dealing With Things You Don't Want to Do (3y)
- How to think about and deal with OpenAI [Question] (3y)
- Insights from "All of Statistics": Statistical Inference (4y)
- Insights from "All of Statistics": Probability (4y)

Wikitag Contributions

- The Pointers Problem, 5mo (+6/-6)

Comments (sorted by newest)

On Downvotes, Cultural Fit, and Why I Won’t Be Posting Again (5h)

Yeah, valid correction.

On Downvotes, Cultural Fit, and Why I Won’t Be Posting Again (5h)

> If people downvoted because they thought the argument wasn’t useful, fine - but then why did no one say that? Why not critique the focus or offer a counter? What actually happened was silence, followed by downvotes. That’s not rational filtering. That’s emotional rejection.

Yeah, I don't endorse the reaction. The situation pattern-matches to other cases where someone new writes things so confusing and all over the place that making them ditch the community (which is often the result of excessive downvoting) is arguably a good thing. But I don't think that was the case here. Your essays look coherent to me (and are also probably correct). I hadn't seen any of them before this post, but I wouldn't have downvoted. My model is that most people are not very strategic about this kind of thing and just go "talking politics -> bad" without really thinking through whether demotivating the author is good in this particular case.

> So if I understand you correctly: you didn’t read the essay, and you’re explaining that other people who also didn’t read the essay dismissed it as “political” because they didn’t read it.

Yes -- from looking at it, it seems like something I agree with (or, if not, something I disagree with for reasons that I'm almost certain won't be addressed in the text), so I didn't see a reason to read it. Reading is a time investment; you have to give me a reason to invest that time, that's how it works. But I thought the (lack of) reaction was unjustified, so I wanted to give you a better model of what happened, which doesn't take much time.

> Most people say capitalism makes alignment harder. I’m saying it makes alignment structurally impossible. The point isn’t to attack capitalism. It’s to explain how a system optimised for competition inevitably builds the thing that kills us.
I mean, that's all fine, but those are nuances that only become relevant after people read the text, so it doesn't really change the dynamic I've outlined. You have to give people a reason to read first, and then put the nuances into the text. Idk if this helps, but I've learned this lesson the hard way by spending a ridiculous amount of time on a huge post that was almost entirely ignored (this was several years ago). (It seems like you got some reactions now, fwiw; I hope that makes you reconsider leaving.)

On Downvotes, Cultural Fit, and Why I Won’t Be Posting Again (17h)

I think you probably don't have the right model of what motivated the reception. "AGI will lead to human extinction and will be built because of capitalism" seems to me like a pretty mainstream position on LessWrong. In fact, I strongly suspect this is exactly what Eliezer Yudkowsky believes. The extinction part has been well-articulated, and the capitalism part is what I would have assumed is the unspoken background assumption. Like, yeah, if we didn't have a capitalist system, then the entire point about profit motives, pride, and race dynamics wouldn't apply. So... yeah, I don't think this idea is very controversial on LW (reddit is a different story). I think the reason your posts got rejected is that the focus doesn't seem useful. Getting rid of capitalism isn't tractable, so what is gained by focusing on this part of the causal chain? I think that's the part you're missing. And because this site is very anti-[political content], you need a very good reason to focus on politics. So I'd guess that what happened is that people saw the argument, thought it was political and not useful, and consequently downvoted.

I, G(Zombie) (1d)

Sorry, but isn't this written by an LLM? Especially since milan's other comments ([1], [2], [3]) are clearly in a different style: the emotional component goes from 9/10 to 0/10 with no middle ground. I find this extremely offensive (and I'm kinda hard to offend, I think), especially since I 'cooperated' with milan's wish to point to specific sections in the other comment. LLMs in posts is one thing, but in comments, yuck. It's like, you're not worthy of me even taking the time to respond to you. The guidelines don't differentiate between posts and comments, but this violates them regardless (and actually the post does as well), since it very much does have the stereotypical writing style of an AI assistant, and the comment also seems copy-pasted without a human element at all.

> A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of 1 minute per 50 words (enough to read the content several times and perform significant edits), you should not include any information that you can't verify, haven't verified, or don't understand, and you should not use the stereotypical writing style of an AI assistant.

I, G(Zombie) (2d)

The sentence you quoted is a typo; it's meant to say that formal languages are extremely impractical.

I, G(Zombie) (2d)

Here's one section that strikes me as very bad:

> At its heart, we face a dilemma that captures the paradox of a universe so intricately composed, so profoundly mesmerizing, that the very medium on which its poem is written—matter itself—appears to have absorbed the essence of the verse it bears. And that poem, unmistakably, is you—or more precisely, every version of you that has ever been, or ever will be.
I know what this is trying to do, but invoking mythical language when discussing consciousness is very bad practice, since it appeals to an emotional response. It's also hard to read. Similar things are true for lots of other sections here; the language is unnecessarily poetic. I guess you can say this is policing tone, but I think it's valid to police tone if the tone is manipulative (on top of just making the text harder and more time-intensive to read).

Since you asked for a section that's explicitly nonsense rather than just bad, I think this one deserves the label:

> We can encode mathematical truths into natural language, yet we cannot fully encode human concepts—such as irony, ambiguity, or emotional nuance—into formal language. Therefore: Natural language is at least as expressive as formal language.

First of all, if you can't encode something, it could just be that the thing is not well-defined, rather than that the system is insufficiently powerful. Second, the way this is written (unless the claim is justified elsewhere) implies that the inability to encode human concepts in formal languages is self-evident, presumably because no one has managed it so far. This is completely untrue: formal[1] languages are extremely impractical, which is why mathematicians don't write any real proofs in them. If a human concept like irony could be encoded, the encoding would be extremely long and way beyond the ability of any human to write down. So even if it were theoretically possible, we almost certainly wouldn't have done it yet, which means that its not having been done yet is negligible evidence that it's impossible.

[1]: typo corrected from "natural"

I, G(Zombie) (2d)

I agree that this sounds not very valuable; it sounds like a repackaging of illusionism without adding anything. I'm surprised about the votes (didn't vote myself).

Wei Dai's Shortform (3d)

> The One True Form of Moral Progress (according to me) is using careful philosophical reasoning to figure out what our values should be, what morality consists of, where our current moral beliefs are wrong, or generally, the contents of normativity (what we should and shouldn't do)

Are you interested in hearing other people's answers to these questions (if they think they have them)?

An argument for asexuality (5d)

I agree with various comments that the post doesn't represent all the tradeoffs, but I strong-upvoted this because I think the question is legitimately interesting. It may be that the answer is no for almost everyone, but it's not obvious.

Rafael Harth's Shortform (11d)

For those who work on Windows, a nice little quality-of-life improvement for me was to hide the desktop icons and do everything by searching in the task bar. (It would be even better if the search function weren't so odd.) I've been doing this for about two years and like it much more. Maybe for others, using the desktop is actually worth it, but for me, it kept cluttering up over time, and the annoyance over it not looking the way I want always outweighed the benefits. It really takes barely longer to go CTRL+ESC, "firef", ENTER than to double-click an icon.
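If you'd rather script the toggle than click through Explorer's context menu, here's a minimal sketch, assuming Python on Windows and that the HideIcons value under the Explorer\Advanced registry key is the flag behind the "Show desktop icons" toggle (Explorer has to be restarted for the change to show up):

```python
# Sketch only: flips Explorer's HideIcons flag via the registry.
# Assumes Windows and Python's standard library; back up the key first.
import subprocess
import winreg

# Standard Explorer settings key in the current user's hive (assumption:
# this is where the "Show desktop icons" state lives on your build).
EXPLORER_KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

def set_desktop_icons_hidden(hidden: bool) -> None:
    """Write the HideIcons flag (1 = icons hidden, 0 = icons shown)."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, EXPLORER_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "HideIcons", 0, winreg.REG_DWORD, int(hidden))

def restart_explorer() -> None:
    """Kill and relaunch Explorer so it rereads the setting."""
    subprocess.run(["taskkill", "/f", "/im", "explorer.exe"], check=False)
    subprocess.Popen(["explorer.exe"])

if __name__ == "__main__":
    set_desktop_icons_hidden(True)
    restart_explorer()
```

Calling set_desktop_icons_hidden(False) and restarting Explorer again should bring the icons back.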