Reading list
Links 25
Score: 0.9732105387750054
User feedback: None
Out links: 5092353
Raw text: 5092353
Title: A conversation about Katja's counterarguments to AI risk — AI Alignment Forum
Description: This post is a transcript of a conversation between Ege Erdil and Ronny Fernandez, recorded by me. The participants talked about a recent post by Kat…
Keywords: No keywords
Text content: A conversation ...
Score: 0.9708061011463978
User feedback: None
Out links: 11669874
Raw text: 11669874
https://zeroknowledge.fm/podcast/285/
Title: Intents with Chris Goes from Anoma - ZK PODCAST
Description: No description
Keywords: No keywords
Text content: Intents with Chris Goes from Anoma - ZK PODCAST PODCAST BLOG EVENTS JOBS ABOUT HACK ...
Score: 0.9631206927594407
User feedback: None
Out links: 11781608
Raw text: 11781608
https://zeroknowledge.fm/podcast/261/
Title: Proofs, Arguments, and ZKPs with Justin Thaler - ZK PODCAST
Description: No description
Keywords: No keywords
Text content: Proofs, Arguments, and ZKPs with Justin Thaler - ZK PODCAST PODCAST BLOG EVENTS ...
Score: 0.9510495419458912
User feedback: None
Out links: 118506
Raw text: 118506
https://arxiv.org/pdf/2306.00008
Brainformers: Trading Simplicity for Efficiency
Yanqi Zhou, Nan Du, Yanping Huang, Daiyi Peng, Chang Lan, Da Huang, Siamak Shakeri, David So, Andrew Dai, Yifeng Lu, Zhifeng Chen, Quoc Le, Claire Cui, James Laudon, Jeff Dean
Scaling Transformers are central to recent successes in n...
Score: 0.9496630444096624
User feedback: None
Out links: 3024683
Raw text: 3024683
https://www.alignmentforum.org/posts/qnYZmtpNPZyqHpot9/conversation-with-paul-christiano
Title: Conversation with Paul Christiano — AI Alignment Forum
Description: AI Impacts talked to AI safety researcher Paul Christiano about his views on AI risk. With his permission, we have transcribed this interview. …
Keywords: No keywords
Text content: Conversation with Paul Christiano — AI Align...
Score: 0.9341804777775017
User feedback: None
Out links: 11781618
Raw text: 11781618
https://zeroknowledge.fm/podcast/260/
Title: ZK in 2023 with Kobi, Guillermo, and Tarun - ZK PODCAST
Description: No description
Keywords: No keywords
Text content: ZK in 2023 with Kobi, Guillermo, and Tarun - ZK PODCAST PODCAST BLOG EVENTS JOBS ABO...
Score: 0.9320205281364848
User feedback: None
Out links: 14601
Raw text: 14601
https://arxiv.org/html/2411.01030v3
Title: Birdie: Advancing State Space Models with Reward-Driven Objectives and Curricula
Description: No description
Keywords: No keywords
Text content: Birdie: Advancing State Space Models with Reward-Driven Objectives and Curricula 1 Introduction 2 Background and Related Work 2.1 S...
Score: 0.9317122655684
User feedback: None
Out links: 404314
Raw text: 404314
https://www.lesswrong.com/users/julian-schrittwieser/replies
Title: User Comment Replies — LessWrong
Description: A community blog devoted to refining the art of rationality
Keywords: No keywords
Text content: User Comment Replies — LessWrong This website requires javascript to properly function. Consider activating javascript to get access to all site func...
Score: 0.9310096943496186
User feedback: None
Out links: 722175
Raw text: 722175
https://alignmentforum.org/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress
Title: Christiano, Cotra, and Yudkowsky on AI progress — AI Alignment Forum
Description: This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky on AGI forecasting, following up on Paul and El…
Keywords: No keywords
Text content: Christiano, Cotra, and Y...
Score: 0.9309927213332742
User feedback: None
Out links: 1060478
Raw text: 1060478
https://www.alignmentforum.org/s/n945eovrA3oDueqtq/p/7MCqRnZzvszsxgtJi
Title: Christiano, Cotra, and Yudkowsky on AI progress — AI Alignment Forum
Description: This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky on AGI forecasting, following up on Paul and El…
Keywords: No keywords
Text content: Christiano, Cotra, and Y...
Score: 0.9306693656339415
User feedback: None
Out links: 130192
Raw text: 130192
http://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications
Title: chinchilla's wild implications — LessWrong
Description: (Colab notebook here.) • This post is about language model scaling laws, specifically the laws derived in the DeepMind paper that introduced Chinchil…
Keywords: No keywords
Text content: chinchilla's wild implications — LessWrong This ...
Score: 0.9290307427478434
User feedback: None
Out links: 1793940
Raw text: 1793940
Title: AXRP Episode 7.5 - Forecasting Transformative AI from Biological Anchors with Ajeya Cotra — AI Alignment Forum
Description: Audio unavailable for this episode. • This podcast is called AXRP, pronounced axe-urp and short for the AI X-risk Research Podcast. Here, I (Daniel F…
Keywords: No keywo...
Score: 0.9273532580822746
User feedback: None
Out links: 2084095
Raw text: 2084095
Title: Evan Hubinger on Homogeneity in Takeoff Speeds, Learned Optimization and Interpretability — AI Alignment Forum
Description: Below is the transcript of my chat with Evan Hubinger, interviewed in the context of the inside view a podcast about AI Alignment. The links below wi…
Keywords: No keywo...
Score: 0.9273080214301624
User feedback: None
Out links: 722167
Raw text: 722167
https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress
Title: Christiano, Cotra, and Yudkowsky on AI progress — LessWrong
Description: This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky on AGI forecasting, following up on Paul and El…
Keywords: No keywords
Text content: Christiano, Cotra, and Yudkowsky ...
Score: 0.9237820041223375
User feedback: None
Out links: 498127
Raw text: 498127
https://www.lesswrong.com/posts/SkcM4hwgH3AP6iqjs/can-you-get-agi-from-a-transformer
Title: Can you get AGI from a Transformer? — LessWrong
Description: UPDATE IN 2023: I wrote this a long time ago and you should NOT assume that I still agree with all or even most of what I wrote here. I’m keeping it…
Keywords: No keywords
Text content: Can you get AGI from a Transformer? — LessWron...
Score: 0.9236605879471667
User feedback: None
Out links: 1060433
Raw text: 1060433
https://www.alignmentforum.org/s/n945eovrA3oDueqtq/p/fS7Zdj2e2xMqE6qja
Title: More Christiano, Cotra, and Yudkowsky on AI progress — AI Alignment Forum
Description: This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky (with some comments from Rob Bensinger, Richard…
Keywords: No keywords
Text content: More Christiano, Co...
Score: 0.9235666330757354
User feedback: None
Out links: 2199110
Raw text: 2199110
Title: More Christiano, Cotra, and Yudkowsky on AI progress — AI Alignment Forum
Description: This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky (with some comments from Rob Bensinger, Richard…
Keywords: No keywords
Text content: More Christiano, Co...
Score: 0.922440038597585
User feedback: None
Out links: 118664
Raw text: 118664
https://arxiv.org/pdf/2209.01667
A Review of Sparse Expert Models in Deep Learning
William Fedus∗ (Google Brain), Jeff Dean (Google Research), Barret Zoph∗ (Google Brain)
arXiv:2209.01667v1 [cs.LG] 4 Sep 2022
Abstract: Sparse expert models are a thirty-year old concept re-emerging as a popular architecture in deep learning. This ...
Score: 0.9215619201689722
User feedback: None
Out links: 11481364
Raw text: 11481364
https://zeroknowledge.fm/podcast/337/
Title: Restaking Research with Naveen & Tarun - ZK PODCAST
Description: No description
Keywords: No keywords
Text content: Restaking Research with Naveen & Tarun - ZK PODCAST PODCAST BLOG EVENTS JOBS ABOUT HACK ...
Score: 0.9213010782989396
User feedback: None
Out links: 11166641
Raw text: 11166641
https://zeroknowledge.fm/captivate-podcast/337/
Title: Restaking Research with Naveen & Tarun - ZK PODCAST
Description: No description
Keywords: No keywords
Text content: Restaking Research with Naveen & Tarun - ZK PODCAST PODCAST BLOG EVENTS JOBS ABOUT HACK ...
Score: 0.9200652406474674
User feedback: None
Out links: 5092370
Raw text: 5092370
Title: AXRP Episode 31 - Singular Learning Theory with Daniel Murfet — AI Alignment Forum
Description: YouTube link • What’s going on with deep learning? What sorts of models get learned, and what are the learning dynamics? Singular learning theory is…
Keywords: No keywords
Text content: AXRP Episod...
Score: 0.9185957983729204
User feedback: None
Out links: 22810
Raw text: 22810
https://www.lesswrong.com/posts/midXmMb2Xg37F2Kgn/new-scaling-laws-for-large-language-models
Title: New Scaling Laws for Large Language Models — LessWrong
Description: On March 29th, DeepMind published a paper, "Training Compute-Optimal Large Language Models", that shows that essentially everyone -- OpenAI, DeepMind…
Keywords: No keywords
Text content: New Scaling Laws for Large Language Mo...
Score: 0.9167238077211532
User feedback: None
Out links: 11669875
Raw text: 11669875
https://zeroknowledge.fm/podcast/283/
Title: BabyAGI, Agents and Cutting-edge AI with Yohei - ZK PODCAST
Description: No description
Keywords: No keywords
Text content: BabyAGI, Agents and Cutting-edge AI with Yohei - ZK PODCAST PODCAST BLOG EVENTS ...
Score: 0.9146148460355693
User feedback: None
Out links: 5217330
Raw text: 5217330
https://alignmentforum.org/posts/LY7rovMiJ4FhHxmH5/thoughts-on-hardware-compute-requirements-for-agi
Title: Thoughts on hardware / compute requirements for AGI — AI Alignment Forum
Description: [NOTE: I have some updates / corrigenda at the bottom. ] …
Keywords: No keywords
Text content: Thoughts on hardware / compute requirements for AGI — AI Alignment Forum This website requires javascript to p...
Score: 0.9143604751418759
User feedback: None
Out links: 4581732
Raw text: 4581732
Title: 2022-5-1: PolyLoss, Subquadratic loss landscapes, Large-scale training on spot instances
Description: An Extendable, Efficient and Effective Transformer-based Object Detector
Keywords: No keywords
Text content: 2022-5-1: PolyLoss, Subquadratic loss landscapes, Large-scale training on spot ins...
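Every entry above follows the same flat record layout (Score, User feedback, Out links, Raw text, Title, Description, Keywords, Text content). Below is a minimal sketch of how such a record might be represented and re-parsed, assuming the "Field: value" layout shown above; the ReadingListEntry dataclass, its field names, and the parse_entry helper are illustrative, not the exporter's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Field labels that appear in the entries above, normalized to snake_case.
FIELD_NAMES = {"score", "user_feedback", "out_links", "raw_text",
               "title", "description", "keywords", "text_content"}


@dataclass
class ReadingListEntry:
    """One ranked link from the reading-list export (fields as shown above)."""
    score: float                  # relevance score, e.g. 0.9732105387750054
    user_feedback: Optional[str]  # "None" for every entry above
    out_link_id: int              # document id, e.g. 5092353
    title: str                    # "Title:" field, empty if the entry has none
    description: str              # "Description:" field
    text_content: str             # truncated raw text of the linked page


def parse_entry(lines: list[str]) -> ReadingListEntry:
    """Parse one entry from its 'Field: value' lines (hypothetical helper)."""
    fields: dict[str, str] = {}
    for line in lines:
        key, sep, value = line.partition(":")
        key = key.strip().lower().replace(" ", "_")
        if sep and key in FIELD_NAMES:  # ignore bare URL lines and stray text
            fields[key] = value.strip()
    return ReadingListEntry(
        score=float(fields["score"]),
        user_feedback=None if fields.get("user_feedback") == "None" else fields.get("user_feedback"),
        out_link_id=int(fields["out_links"]),
        title=fields.get("title", ""),
        description=fields.get("description", ""),
        text_content=fields.get("text_content", ""),
    )
```

Entries parsed this way could be sorted with entries.sort(key=lambda e: e.score, reverse=True) to reproduce the descending-score ordering shown above.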