Fresh Lens Podcast
The Techno-Optimist Manifesto
In this episode, Trish & Hirad discuss Marc Andreessen's Techno-Optimist Manifesto, advancements in AI, accelerationism vs. decelerationism, the concept of the right-wing progressive, and the historical follies of progressivism.
Links to the articles discussed:
- The Techno-Optimist Manifesto
- The Rise of the Right Wing Progressive
- Yudkowsky's call to shut down AI in Time Magazine
[00:00:00] **Hirad:** How's it going Trish?
[00:00:03] **Trish:** I'm good, how are you?
[00:00:05] **Hirad:** I'm good. I just saw a tweet, just before jumping on the recording, that I think is a little bit pertinent to our topic of discussion. Do you know who Bryan Johnson is? I'm not sure if it's Johnston or Johnson.
[00:00:17] **Trish:** The most generic name I've ever heard in my life.
[00:00:19] **Hirad:** Right. Yeah. I don't remember exactly what this guy's done, but he's some filthy-rich billionaire.
And the reason he's famous these days is that he's taking all his money and pumping it into longevity research, primarily by experimenting on himself. So he's posting tweets about all these things he's doing to his own body, trying to make himself younger. He's become a bit of a polarizing figure.
Honestly, I haven't followed his development closely enough to know exactly what he's doing, but he used to look like a normal person, and now he's looking a bit like a vampire.
[00:01:01] **Trish:** Is he the guy who was messing with, like, the young people's blood? Like doing weird blood transfusions and stuff?
[00:01:07] **Hirad:** That's totally his kind of thing. I don't know if he was the guy, because I think I heard about that years ago, so I don't know if it was him. He kind of popped onto the scene more recently, but that kind of thing is exactly up his alley. I would be shocked if he hasn't already tried it, because that was one of the earliest longevity things. But the latest thing he posted a tweet about was how he's injected Botox into his penis and got an extension of about one centimeter. So that can kind of be our intro to what we think of technological progress.
[00:01:43] **Trish:** I'm just wondering if one centimeter has ever actually, like, improved the sex for his partner. Well, I don't know, but it just seems like this is men competing with each other. I really don't think that the vast majority of...
[00:02:00] **Hirad:** Twitter is amazing for this, because one of the top replies to his post was from Paul Graham, who's a famous venture capitalist, and his reply was: "I think this may be revealing more than you intended, if you thought that one centimeter was worth going through all that trouble."
[00:02:17] **Trish:** Yeah. I don't know. That's so weird.
[00:02:21] **Hirad:** But with that digression, what are we talking about today?
[00:02:25] **Trish:** We are talking about techno-optimists?
[00:02:29] **Hirad:** Yeah, I guess that's a word that some people have used for them.
[00:02:35] **Trish:** Because there was a manifesto that was published on Substack.
[00:02:39] **Hirad:** Right. So I'll provide a bit of background. Have you followed some of the developments and debates around artificial intelligence?
[00:02:50] **Trish:** Only in like the most sort of outer orbit.
[00:02:55] **Hirad:** Have you heard of the AI safety discussions?
[00:02:59] **Trish:** Nope.
[00:03:00] **Hirad:** Okay. So I think this will be useful background for our listeners. Obviously, over the last year there was this explosion in artificial intelligence capabilities. We've got ChatGPT, we've got these image generation models, and now we're getting video generation models.
And obviously the pace of development is getting faster and faster as well. When ChatGPT became this huge hit, there was this group of people who had been beating the drum of AI doom for many, many years, and they have some non-trivial followings. People listen to them. Some of them are part of this movement called Effective Altruism, of which such famous outstanding citizens as Sam Bankman-Fried were members.
But basically what they started arguing is: hey, we've come up with these theories of how AI can be dangerous. Their idea is that if you have something like a self-improving AI, it can quickly become more intelligent than humans, and it can be an existential risk to humans.
And based on that, they started publishing articles in Time magazine calling for a moratorium on all AI development, and calling, in some cases, for airstrikes against any country that doesn't respect the moratorium.
**Trish:** Oh boy.
**Hirad:** The moratorium that was being proposed. Like airstrikes against their data centers, where they have the computers hosting the AI models.
So that's been one side of the debate. At some point, almost exactly a year ago, a group of them signed this letter to the White House asking for a six-month ban on all AI development, which didn't happen. Now it's been twelve months since they asked for the six-month ban, and it's like, well, what would you have done in that time?
I guess those arguments are exactly where they were back then. On the other side, the people who don't like these AI safety types call them "decels," like decelerationists. Of course, it's also a wordplay on "incels": involuntarily celibate, angry men online.
The opposite of that is the accelerationists, which is where this Techno-Optimist Manifesto comes in, written by Marc Andreessen, who is also a very well-known venture capitalist. And yeah, I guess that's the background of what we're talking about.
[00:05:35] **Trish:** Mm hmm. So, when you sent me Marc Anderson's, is that, did you just say, is his...
[00:05:40] **Hirad:** Andreessen.
[00:05:41] **Trish:** His Techno-Optimist Manifesto, it was literally the most depressing thing I'd ever read in my life.
[00:05:46] **Hirad:** Why? What was your raw reaction?
[00:05:51] **Trish:** mean, my raw reaction was that, sort of, they've, they've got it all wrong, I think, in terms of what they think technology is going to be able to solve. And not I guess I mean just like fundamental conditions of being human and living a happy, fulfilling life.
[00:06:20] **Hirad:** I would love to dive deeper into that and get all of your thoughts on this. But maybe before we do, we should recap for any listeners the gist of the Techno-Optimist Manifesto.
[00:06:38] **Trish:** It's basically that technology will solve all of our woes, including any sort of climate issues, overpopulation, you know, any physical suffering, material wealth; that if we can just be brave enough to envision a boundless world, it will be ours. The markets can solve everything, intelligence is the highest achievement of humanity, and intelligence will save us.
[00:07:08] **Hirad:** Yeah. I have a couple of quotes that stand out. Basically: growth is the purpose of life and the ultimate good; we're either growing or we're dying; technological progress is growth; there is no limit to how much we can grow, the universe is your oyster; and technology is the ultimate source of growth.
And therefore it's also the ultimate good, and effectively we can achieve utopia. We can make everybody richer; we can make everything cheap and abundant. But there's this one caveat he throws in there. He says: "We are not utopians. We are adherents of what Thomas Sowell calls the constrained vision." Which we can talk about a little more as well, but that's basically the gist of it. And it's a pushback against all these calls for deceleration and conservatism, I guess.
[00:08:10] **Trish:** Mm hmm.
Do you think you're a techno-optimist?
[00:08:13] **Hirad:** No, hang on. Before that, I want to get your raw reactions to it. Before I provided the background, you were talking about what parts of it rubbed you the wrong way. Let's go back to that.
[00:08:25] **Trish:** Okay. I mean, on one hand, I buy into it to a certain extent. When you look back at the last couple hundred years of history, since the industrial revolution, it was always that we're on the brink of collapse. You know what I mean? There was always some alarmism, whether it was overpopulation or something else, and then we had the Green Revolution.
There's just been a series of these alarmist moments of "oh my goodness, everything's going to collapse," but technology does invariably save the day. And I like technology. I love living in a culture where we have analgesia and, you know, good healthcare.
And I do feel like I can live a very healthy, happy life. But on the other side, these guys take it to the extreme and almost moralize it in a way that I'm very uncomfortable with. And for all our technology and all our abundance and all our material crap, I don't know if we're living happier, more fulfilling lives. In his little manifesto, he really denigrated the times humans lived in mud huts.
And sometimes, especially with all the Graeber and everything we've read, I don't know, maybe people in mud huts were happier than we are today. There's something to be said about that. And it rubs me the wrong way. To me, it harkens back to this Cartesian blank slate: there's no nature, technology can fix everything about you or change everything about you, and we can just completely unshackle ourselves from any biological realities.
[00:10:13] **Hirad:** yeah,
[00:10:13] **Trish:** were kind of my knee jerk reactions to it.
[00:10:18] **Hirad:** Yeah, this might be shocking to listeners, but I wholeheartedly agree.
[00:10:24] **Trish:** Wait, with me or the techno-optimists? Who do you agree with? Oh, wow.
[00:10:30] **Hirad:** With you. I think I wouldn't have agreed, or maybe I would have been less inclined to agree, pre-Graeber. But definitely, I think you're right. And I also think he's taking a very extreme position in the manifesto, and he tries to mask it with this throwaway line that "we are not utopians." But this is very much a utopian vision of perfectibility.
To your point, there were two things over the last few years, and I feel like I'm always coming back to this every episode, that probably nudged me in the direction of being less likely to be a techno-optimist.
One was the fact that we read all these works from Graeber. The other one is COVID. Because he's taking this moralizing stance about the importance of technology. At some point he says that if you slow down artificial intelligence development, that's akin to murder, because artificial intelligence can save lives, and by preventing its development you are literally killing people. This is what he's saying. And to me, that is exactly the kind of moralizing wordplay the woke use when they say "silence is violence," and literally anything except actual violence is violence.
Or if you don't use the right pronouns, it's violence. He's basically using that same weaponization of morality to argue his point. And the danger there is that it doesn't matter which side you're on, whether you're talking about accelerating something or decelerating it.
When you have that kind of mindset, to me, that is a very dangerous viewpoint, because the next step from it is: well, if stopping artificial intelligence is literal murder, then whatever we do to prevent that is good and justified. And it brings me back to the early eugenicists in the US, the original American progressives, who had just discovered the new science of evolution and genes. They were going through all these changes with the industrial revolution, and they didn't know what to do with it.
They were trying to make sense of the new knowledge they had. It's almost horrible when you have just a little bit of knowledge, or a little bit of technology, and you haven't quite figured out what to do with it, what it means, or where it fits. And then you try to do dangerous things with it.
And I think what Andreessen is doing is exactly what the progressives from a hundred years ago would be doing. Back then they had just discovered evolution. They were integrating the knowledge of biology and genes into people's worldviews, and all these progressives of the time were worried about the impact on the gene pool.
It sounds a little bit like people who worry about climate change or other grand things we can't really control. They wanted to change society in order to get the gene pool under control, effectively by removing from the gene pool those they saw as unfit for the next generation.
So anyone that...
[00:13:57] **Trish:** Which really hinged on intelligence a lot, right?
[00:14:00] **Hirad:** Exactly. That's actually another thing they had in common with the Techno-Optimist Manifesto: that intelligence was the highest good. They didn't take a complete view of humanity; it was all on this intelligence axis. And so anyone who was considered, air quotes, "feeble-minded" could be forcibly sterilized.
And there was actually a Supreme Court case where the Supreme Court at the time decided that it was perfectly constitutional for these eugenics laws, these forced sterilization laws, to be on the books. In fact, that precedent, I just learned while researching for this episode, has never been overturned.
Scarily enough, it has just been kind of ignored over time. But yeah, that was my reaction to it as well. A lot of dangerous parallels.
[00:14:55] **Trish:** Yeah, there's a hubris to the whole thing, too, that really rubs me the wrong way. It seems like they think they're God. And I don't mean that in a religious way, but they're not approaching anything with any caution or humility at all.
I'm a big advocate and proponent of using more nuclear power and stuff, but when we unlocked that technology, we also gained the ability to literally blow up the entire Earth. So these things can be double-edged swords. That's not a profound point or anything, but it does just seem like they are completely dismissive of any potential downside.
[00:15:43] **Hirad:** Mm, yeah, exactly. It's such an emotional impedance mismatch for me, because by default I think I would be a techno-optimist. This is a bit of a tangent, but I have this Spanish teacher I really like. We get together and have conversations in Spanish, and we actually talk about political stuff, which is why I really enjoy my Spanish classes.
She's been bringing these debate topics for us to talk about, and with every topic I've discussed with her, I've now realized that all questions come down to one thing for me: does supporting a particular position mean an increase or a decrease in the power of the state?
That's really the only thing I care about. So one example would be the death penalty: should it exist or not? My answer would be no, because I don't want to let the government kill people. That's really all I care about. I don't care about the impacts of the death penalty on crime rates or anything like that.
I just think the government should not have that power. And I again forget where I was going with that. I did have a point.
[00:17:01] **Trish:** What's in that glass you're sipping?
[00:17:04] **Hirad:** I know. It's also like 7:40 p.m. here.
[00:17:08] **Trish:** So you're saying that the technology would most likely end up being wielded by the state, and you're worried about that sort of concentration of power through technology? Is that where you were going with it?
[00:17:29] **Hirad:** I forget what we were talking about even before this that made me think of this point.
[00:17:37] **Trish:** Being like God? I don't know if that was it.
[00:17:48] **Hirad:** Right. What I was thinking about was that a lot of the positions I would otherwise support seem to become the most perverted version of themselves when they turn into law. One example I've been thinking about recently is Canada's medically assisted suicide laws.
At the time they were coming into existence, I thought that was a great thing, because if someone is terminally ill and in pain, why wouldn't you extend the same courtesy to them as you would to your dog? That's what I thought at the time. Fast forward, how many years has it been?
Like six. It's not that long, five or six years that this has been law in Canada, and it has become the most perverted version of itself, where we have morbid advertisements promoting death for things that people should not have to deal with. A competent healthcare system should just be able to provide an actual solution for these people.
We've had people who are veterans of the military with chronic pain, and the suggestion of the healthcare system to them is often: kill yourself. The wait time for a new wheelchair is way too long? If you can't live without it, you can always kill yourself.
And now they want to expand that to anyone with a mental illness. And in 2024, literally everybody and their mother has a mental illness or can easily get diagnosed with one. So there's no end to this perversion. So I come back to this question of: well, I thought I supported this position.
But clearly, whatever I support, I have to caveat it with: what would the most perverted version of it look like? Because that's the one that will probably actually become real. And I feel that way about this Techno-Optimist Manifesto, because I do think there's a lot of good in it. I think he is right about a lot of the things technology can give us.
But we've got to think about what the most perverted version will be if this is the mindset we're taking.
[00:20:00] **Trish:** And, I mean, the techno-optimist stuff rubs me personally the wrong way as a woman. This is something I've been thinking about a lot lately. These aren't my own ideas at all; I was listening to Mary Harrington on a different podcast, and a lot of it really resonated.
I haven't read her book yet, so you're not really getting a good summary of her argument, but that's what set me down this path. I just feel like feminism of the last, I don't know, 50 to 70 years, and look, I'm very happy not to be relegated to the home or not allowed to vote, so I don't want to completely poo-poo it, but it seems like it's mostly been using technology to try and turn me into a man.
Right? It's like: we're going to give you the pill, and this means you're not going to get pregnant, and you can delay it. Because if you want to actually have a job and be successful and find a reasonable partner, you're going to have to delay pregnancy so long that they literally call it a geriatric pregnancy if you're over 35, right? So you're going to completely ignore the biology of when it's good and healthy for you to have a baby, but don't worry, then we've got all this IVF stuff we can do. So maybe you can continue to defy nature and have a baby when you're that old. And if that doesn't work, oh, maybe we can get a surrogate to have your baby.
And there's also this weird thing with, quote unquote, high-status women, women who are rich and famous, who just choose to have someone else carry their baby, because pregnancy somehow seems to, I don't know, affect your figure too much, or your body. I don't know why they choose not to. And then, beyond that, I feel like I have to have plastic surgery basically until I die to keep me looking young and attractive.
And all this technology feels like it's being used to turn me into something that I'm not, and that isn't healthy for me at all, to try and make me a man. And I'm like, I don't want to be a man. I want to exist in a society where I can thrive with what is just the biological reality of being a woman.
And I mean, I don't have kids, so I'm making this argument from a different place, but you know what I mean, right? It just seems so messed up that they keep throwing technology at women to fix problems that I shouldn't even have. I should just be able to have babies in my 20s, when I'm meant to, without having that completely kick me out of any career path or whatever. Much less the cultural side of even trying to find a man who will commit to you so you can have babies in your 20s, which is a whole other thing that I feel is culturally messed up.
[00:23:09] **Hirad:** Yeah, everything you just described is such a perfect little package of dystopianism. It's basically exactly Brave New World, I think. I don't remember the whole narrative of that book, but I do remember they used to grow their babies, which is another thing that's being promoted by techno-optimists of another variety.
Maybe not exactly Andreessen, but this whole notion of being able to incubate babies outside of a body is definitely on the horizon; it's being worked on. So that was definitely an element of Brave New World. And another was the government handing out drugs to keep people happy all the time.
Kind of sounds like Vancouver, except I don't know if they're happy or not. And I distinctly remember, I read it when I was a teenager, there was this one element, I don't remember its significance, but there was a group of people who were the savages, and they were in a fenced-off area living a primitive life.
And I remember reading this book and thinking: that's where I want to be. I want to be with the savages. Everything you just described sounds so dystopian. And the career side of that equation, particularly for women, is to me such a con. It's been sold as a win, like, "hey, you get to have your own career," because, again, kind of like what we read in Bullshit Jobs, one person used to be the breadwinner in a household, and you used to be able to have a great life.
[00:24:48] **Trish:** And I think that women should be able to be the breadwinner or have the career if they want. The idea is just that there should be enough flexibility that you could make any number of arrangements work. Right? It shouldn't be one or the other.
[00:25:06] **Hirad:** Well, I do think normalizing a single breadwinner in a household is a good thing. I don't care which person. But it is definitely a con to make people work twice as much and not be able to gain the same standard of living. For a lower standard of living, actually, I would say.
[00:25:31] **Trish:** Mm hmm.
So like, I don't know, for all it's, I guess it's, yeah, just coming back to that double edged sword thing of technology, right? And I feel like if I'm critical anything about, you know, like, the pill or any of these technologies, you sort of just get labeled as some sort of, like, anti feminist or whatever.
But I, it just feels like all of feminism of the last 50 years has been trying to turn me into a man. And like, I don't want to have sex like a man. I don't want to like, I just can't, I just be a woman. Like, sheesh.
[00:26:07] **Hirad:** Yeah. So the trigger for us to talk about this subject, along with the Techno-Optimist Manifesto, was another Substack article that I read. This one comes from a Substack called The Upheaval, by the pseudonymous writer N.S. Lyons. And one of the things he points out is that Andreessen's manifesto has some conservatism, some libertarianism, some neo-reactionary elements, and some anti-woke stances, and that these things don't even really go together, especially the element of "we're going to be conservative and we're going to be cautious," because there's nothing conservative or cautious about anything else in it.
Being conservative and cautious, he says in his piece, requires some kind of respect for what came before, for how things were in the past. Whereas the whole piece is about, and this is that famous phrase Mark Zuckerberg used, which is now a way of life for some people in Silicon Valley, "move fast and break things": rapid change, rapid advancement, and rapidly getting rid of what was there before in the interest of the new things that are coming.
And in that kind of framework, there can't be conservatism, because there can't be any respect for what came before. If you think technology and progress are the ultimate moral good, you aren't going to want to slow down. So what N.S. Lyons is saying in his piece is that this is not a conservative position.
This is not a libertarian position. What it is is the right-wing progressive position. It's right-wing, specifically as opposed to left-wing, in that the left-wingers are way more egalitarian. They're way more interested in everything being equitable and there being no hierarchies, whereas the right wing still believes in hierarchies.
They still believe in a meritocratic system. They still believe that some people are better at things than others. So they want to reject the woke ideology in that way, but they're still progressives. And I think that's the part that harks back to the progressives of a hundred years ago, because they still believe in science and technology as the ultimate way of advancing human society.
And to your point about the eugenicists of a hundred years ago, a lot of it is still along one axis of intelligence, except this time it's artificial intelligence they're talking about.
[00:28:55] **Trish:** I mean, maybe it's just because I've always been low-level, a little bit self-conscious about my intelligence, so maybe that's why I'm going to take this tack. But I don't know why humans think this is the be-all and end-all of human existence.
[00:29:12] **Hirad:** Yeah.
[00:29:13] **Trish:** It's literally the only thing people are so fascinated by: trying to define it, trying to measure it, trying to figure out who's got it, trying to figure out if we're, as a species, getting smarter or dumber. I don't know, what do you think about the hoopla that is made of human intelligence?
[00:29:36] **Hirad:** There are some seeds of techno-optimism in me, so I'm very sympathetic to a lot of the things Andreessen says. I do think intelligence is, as he describes, the driver of a lot of growth. Intelligence and technology go hand in hand, because you need one to produce the other.
And I do think that with artificial intelligence we are going to save lives, we are going to get better at producing energy, and we are going to do all kinds of cool things. And probably a hundred years from now, we'll look back and it will be the same thing, where the poorest people a hundred years from now will be better off than the richest people today, because of technology produced by intelligence.
The problem, the part of this Techno-Optimist Manifesto and its whole moralizing stance that I think rubs both of us the wrong way, is that it reduces humanity to just that. Everything drops into a single dimension.
And I think there's a lot of human existence that is outside of intelligence. You don't need maximum intelligence for happiness. So, your point about people who lived in mud huts: not that they would be less intelligent, but they definitely would have less technology.
[00:30:54] **Trish:** I want to loop back to when you talked about, what was it you said exactly, the poor will live better in the future? You said something along that line.
[00:31:04] **Hirad:** Like the poorest people a hundred years from now will probably be better off than the richest people today.
[00:31:07] **Trish:** I know, but can you think of how messed up it is that maybe 5,000 years ago there weren't even poor people? That probably wasn't even a thing. If you look at a lot of these indigenous cultures and stuff, there wasn't this whole stratification with people at the bottom. I mean, I guess there were slaves and stuff, but I kind of feel like technology has allowed us to create "poor" as, like, a socioeconomic class.
[00:31:37] **Hirad:** I think the techno-optimists would argue that everybody was poor back then. Or maybe, let's say, even at the time of...
[00:31:50] **Trish:** What, just because we didn't have a bunch of shit made out of plastic, like, we were poor? Come on.
[00:31:50] **Hirad:** Well, I'm playing devil's advocate here. In terms of your access to life-saving medicine, in terms of your access to basic sanitary conditions, in terms of how much physical labor you had to exert just to meet the demands of daily life...
[00:32:18] **Trish:** Those two I'll give you, but I don't feel like having to actually move around and be active is a downside. Most people would probably be much happier.
[00:32:29] **Hirad:** Well, so that's the thing. That's the part of this whole thing, where everything gets reduced to technology and intelligence, that rubs me the wrong way. Right now we all go to the gym to feel good. Some of us, anyway; some people are on their way to letting technology and intelligence turn them into that fat blob from WALL-E, getting teleported around on his hoverboard seat and letting technology do all his work for him.
But some of us go to the gym to simulate a real human existence. And I wonder to what extent we are losing something by not being in the real world. For years, personally, I always wanted to learn more things on the intelligence axis. I wanted to learn new concepts.
I wanted to learn more and more complex things with regard to engineering and technology and science and all of that. And if I have any free time right now, the kinds of things I want to learn are things like dance. I would love to learn woodworking. I would love to learn physical things.
Things that are a poor use of my time, if you take an economist's view of it. But they literally make me feel more alive. And that element is just not accounted for in any of these single-dimensional descriptions of what we need to optimize.
[00:34:11] **Trish:** Yeah, exactly. Because learning is the joy of life. I don't want to moralize one hobby over another, and the goal shouldn't be to not learn. But it does feel like, as a society right now, if you want to make money, start making STEM toys for children.
To get your kids into STEM, right? Because that's where we've put all the societal "this is what matters." These are the good things, you know what I mean? This is what you want your kid to do; this is what the smart people do. And that's fine if that's what kids have natural inclinations toward.
But I feel like you should let every kid explore what they're naturally drawn to, and not tell them that if they're drawn to, say, working with people or taking care of people, that that's somehow less intelligent or less valuable than, I don't know, writing some code.
[00:35:14] **Hirad:** Yeah, exactly. I used to work in downtown Vancouver, and everybody who works in a white-collar office takes their lunch hour at the same time. Around where we worked there were lots of big tech companies; Amazon had a building there, Microsoft has a building there.
Every lunchtime I would walk around the streets of downtown Vancouver, and I could spot groups of software engineers, because they would always be wearing their jeans and hoodies, with a little office badge hanging from their belt or somewhere.
And they would almost always be a little plump and overweight. That's my stereotypical image of what a software engineer is. And I'd look at them and think: I don't want to be that. But more importantly, as I've done more and more physical activities, it's almost like I pity them. It's so clear when you look at someone like that: this person doesn't know how to use the very first tool he was born with.
He doesn't know how to use his arms properly. The physical dexterity is not there, except maybe for typing on a keyboard; in that way it's very much there. But there's a whole thing about learning how to use your body in a refined fashion. And that's not even to produce anything; that's just for, like, dance, or...
[00:36:42] **Trish:** Or just a musical instrument. I don't know, technology has freed us in all these ways, where now I don't have to know how to make music anymore, and no one in my household needs to know how to make music, because we have access to all of this. And I mean, I love this stuff.
I'm into tech, but it does feel like maybe we've thrown out something prematurely.
[00:37:03] **Hirad:** Yeah. And to the point about the musical instrument: part of what that entails, when you learn any kind of musical instrument, is some element of pain, and pain is just feedback from reality, right? You have to do something with your hands, or whatever part of your body you're using.
If you're playing the guitar, the skin at the tips of your fingers needs to get calluses and toughen up a little bit. If you're playing the piano, you need to be able to stretch your fingers in particular ways.
But in all cases, there's some feedback from the actual world. Whereas now, if you wanted to get into making music, maybe not today but very soon, you can probably just start describing the kind of music you're going for, maybe with some inspiration from different famous artists, and some artificial intelligence tool will just produce it for you. And then you can keep talking to it and keep refining that thing.
So we keep producing all these wonderful abstractions to shield us from, essentially, reality. But the downside is that we don't know about reality, and we also don't know what happens when we are so deeply abstracted away from it.
We don't know what we're capable of. We don't know how blind we'll be.
[00:38:35] **Trish:** Yeah. And, you know, when you teach your kids musical instruments, sometimes there's this idea that you're going to abandon the hobby if you don't end up becoming very high-level at it. It feels like we've divorced it completely, and the same kind of thing with sports and whatever: you try a bunch of these things as a kid, but if you're not awesome at it, maybe you don't continue. The idea of doing something just for the joy and pleasure it brings, we've almost forgotten about that. It's like: if you're not going to be elite at this, then what's the point? The point of teaching kids this stuff, and starting them at what I would consider sometimes ridiculously young ages, is to get them to the end of being elite, instead of giving them a skill that will bring them joy, like making music together.
[00:39:32] **Hirad:** And part of it is the joy, but part of it, again, is your development as a person. When the real world is telling you something, it's very different from when you're abstracted away from it. It's learning something that is not negotiable.
Because you don't negotiate with physics. If you're building something physical, say you're doing woodworking and the pieces don't fit, you have to give up whatever preconception you have and accept the reality that's in front of you.
Whereas the more we deal in worlds of abstractions, and I think this is partly why we get these woke mindsets, people can make assertions that are nonsensical, like: how many genders are there? Or: what is violence?
And they try to redo language, because to them everything is just language, because they never actually got any...
[00:40:33] **Trish:** Punched in the face.
[00:40:33] **Hirad:** Feedback, exactly. Getting punched in the face is real-world feedback from other people, but it could also be from, you know, just... Actually, the one field that has been very resilient against this kind of mindset is engineering, because that's the one field where you get very fast, real-world feedback.
You can't argue with words about whether a bridge is going to stand up or not. The bridge either stands up or it doesn't.
[00:41:04] **Trish:** Yeah. Your program runs or it doesn't.
[00:41:07] **Hirad:** Exactly.
[00:41:09] **Trish:** You actually just reminded me, too: in the Techno-Optimist Manifesto, it was talking about how, you know, people were worried about starving, so we invented technologies to be able to produce enough food.
We had the Green Revolution. It starts with premises like that, which I'm really on board with. And I was like, yes, I love this. Sometimes you read just heart-wrenching things about hunter-gatherer societies, like an Inuit woman having to leave her baby in the snow because they knew they couldn't support that many people.
So obviously I am in favor of technology solving problems like that. But then it got down the line, and it was like: we were lonely, so we invented the internet. And I could not believe that those words came out of someone's keyboard, and that they thought this solved the problem of loneliness.
It's exacerbated it by, I feel like, ten zillion times.
[00:42:06] **Hirad:** Just giving him credit here, I think this is the desire to twist things into a particular narrative, a very linear narrative that can fit comfortably in people's brains. I'd be shocked if he actually believes...
[00:42:22] **Trish:** How do you even write that?
[00:42:24] **Hirad:** ...that the internet solved loneliness, or was designed to solve loneliness.
[00:42:28] **Trish:** That's when I was like: alright, now we're completely in La La Land.
[00:42:36] **Hirad:** But what was particularly depressing: there are all these elements we talked about, why the Techno-Optimist Manifesto rubs one the wrong way. But when I read this piece on the right-wing progressives, one of the things I realized that was extra depressing was that we don't actually have any side for freedom here. Because the kind of mindset that produces the Techno-Optimist Manifesto, like we talked about, is a moralizing thing, right? If you're not supporting AI, you are literally killing people. This is exactly the kind of mindset we saw again during COVID: if you don't do this thing that I think is the right thing to do, you're literally killing people.
And oh, by the way, once we make this great leap of logic, that means we can justify anything: totalitarian control, tyranny, throwing out your fundamental rights. And there's this other parallel going on here, where our good friend Yuval Noah Harari, I don't know if you've seen this, gave a TED talk insisting that all these things we call human rights are not real things.
They're just human-made stories. So he's picking at the foundation of what we might consider inalienable human rights, and basically saying: it's just stories; we can create other stories.
[00:44:16] **Trish:** I mean, like, sorry, is he wrong?
[00:44:23] **Hirad:** I mean, I would say he's blasphemous.
[00:44:29] **Trish:** I mean, I don't like what he's saying, but, you know.
[00:44:34] **Hirad:** He's technically not wrong. It's just that he's also not saying that in a vacuum. He's saying it in a context of...
[00:44:43] **Trish:** Right.
[00:44:44] **Hirad:** ...don't be too attached to these things, because they're not that big of a deal. And in the context of everything that's happened in the last four years, everything has kind of lost its innocence for me at this point.
So this is not just a TED talk. This is manufacturing consent, or planting the seeds of some way of thinking that opens the door. And this is why I feel like our moral codes are so important, because our collective moral codes, which we've mostly lost along with religion, dictate what is okay and not okay.
At least at some point, even in the absence of religion, there was this idea of inalienable rights, or God-given human rights. And if we don't believe in God and we don't believe the rights are inalienable, then what do we actually have to hang our hats on? What is actually preventing complete darkness from descending on humanity?
[00:45:52] **Trish:** I sort of love when you duck into religious speak as, like, a non-religious person. I don't know why.
[00:46:00] **Hirad:** I mean, actually, I think you've kind of observed this transition. I have to stop calling myself non-religious, to some extent.
[00:46:09] **Trish:** Well, you don't practice any religion, so you are non-religious.
[00:46:13] **Hirad:** I don't, but I have a deep desire to change that, partly because of all of these things we talk about. My joke is that I feel like God's entering my head through the back door. It's all these things I'm sensing in the world, and to my extent, it's like, well, I'm sensing a lot of evil things. Now God is dead.
We've taken the foundation out from under everything. Everything is just a story, which means anyone can do anything they want. And I don't think that's going to lead us anywhere desirable at all. So I feel like I'm staring at that evil, and I can be pretty sure that it's evil.
I don't know what's on the other side of it. On the one hand, there had better be something, because if there isn't, we're screwed. But the other thing is, the more I try to articulate what I'm feeling or learning in life, the most concise way it comes out is in all this religious-sounding language, because otherwise I would have to write paragraphs and paragraphs to try to describe something.
Whereas this one phrase, this one abstraction of God, seems to have solved it. And in that sense, there's definitely some grain of truth there. I'm coming around to the Jordan Petersonian idea of what is true. I don't know if it's objective-reality true.
Maybe it even is objective reality, but it's not an objective reality that we can fully grasp. So we reach for these abstractions. But I still have a problem attending church, because I don't literally believe that a person was literally dead and then literally resurrected.
Or many other stories.
[00:48:12] **Trish:** Yeah. No, I get it.
[00:48:15] **Hirad:** Maybe they're just stories.
[00:48:17] **Trish:** I get it. Yeah, it's a weird leap they made, that all technology is good. I don't really understand how they can gloss over the separate question of morality, that you can implement anything for good or evil purposes. It's a weird stance; they really don't engage with it at all. They don't talk about human dignity. I guess they talk about reducing suffering a little bit, but not in the ways that resonate with me.
[00:48:56] **Hirad:** To me, this is an absolute mirror image of wokeness. It's just as cavalier, I guess, just as aggressive about what it believes. It's just more along lines I would tend to agree with. But it's still a very aggressive, fundamentalist view of the world; it's fundamentalist about technological progress. And I think that's what we're detecting: that kind of way of thinking.
[00:49:36] **Trish:** Does it worry you, reading stuff like this? Because honestly, it worries me zero percent. Not worried at all. I read this other book this summer called The Myth of Left and Right, and I feel like only a very small percentage of people are analyzing political philosophies and aligning themselves with them; for most people it's kind of just weird tribalism.
[00:50:04] **Hirad:** Yeah.
[00:50:04] **Trish:** And I kind of think this is a little too off-base for people to really get on board with.
But that's just my own speculation; it's an argument I absolutely cannot support, because it's based on my gut feelings. I just feel like this is a few people who buy too much of their own bullshit.
[00:50:27] **Hirad:** So I'll be the absolute pessimist here. They do buy a lot of their own bullshit, but these guys are not lightweights. These are people who are taken seriously. On the flip side of this, there's this guy whom Andreessen would not like; his name is Eliezer Yudkowsky.
He's like the head honcho of the decels, the artificial intelligence decelerationists, and he's the guy who wrote this piece in Time magazine about the need for airstrikes on data centers.
We should have a non-proliferation treaty for AI, and if someone violates it, there should be airstrikes on data centers. I think that was the argument. And that guy is taken seriously. I don't understand why, because if you just watch him in a video, it's clear this guy is on many spectrums. I wouldn't want to talk to him at a party. But then he publishes this piece in major publications, he gets referenced in many places, and his way of thinking becomes an inspiration for this letter to the White House calling for a six-month moratorium, signed by major scientists.
Now, some of the people who signed it have a vested interest. I don't remember if it was Sam Altman, the CEO of OpenAI, but there are definitely lots of people from OpenAI, lots of people from the companies at the leading edge of artificial intelligence, asking everybody else not to get into their industry.
So I think those people are in it for their own self-interest. But at the same time, there are all these other serious scientists and technologists who are buying Eliezer's story, which is completely fabricated. It's a whole new thing that I'm noticing.
I remember we read Science Fictions, and we realized that all these studies are fake. Now these people aren't even basing their ideas on fake studies. They're basing them on thought experiments, officially. They're just thought experiments. They sit there, they have some thoughts, and then they publish them as if this is reality.
That's the whole "artificial intelligence is going to be an existential risk" thing. There's a lot we don't understand about artificial intelligence, but someone imagined this scenario, and based on that, they're advocating airstrikes on data centers.
[00:53:01] **Trish:** I mean, okay, yeah. Obviously the airstrikes on data centers are crazy. But we could definitely create something that would kill us or destroy us. That's not out of the realm of possibility at all, right?
[00:53:14] **Hirad:** Yeah. And so now we've got these two sides. To answer your question of whether it worries me: this is where it worries me. We've got the accelerationist idiots who think it's the ultimate moral good. And...
[00:53:30] **Trish:** Yeah.
[00:53:31] **Hirad:** I do think, again, I'm going to give Andreessen a little bit of credit here: I think he's deliberately overstating his case, because he wants to push back. He thinks the majority opinion is against them, and he wants to penetrate through that consensus. But we've got this one dystopian version over here, and then we've got another dystopian version from these decelerationists. The consequence of what they're advocating is that they want people to stop developing artificial intelligence, which is, of course, never ever going to happen. It's kind of like when they tried to ban gain-of-function research in the United States, and we just exported it to China: here, you go do this gain-of-function research in your lab with questionable safety standards, right?
[00:54:20] **Trish:** Yeah. I'm going to say a really dumb thing, and I know it's really dumb, but I feel like it's also just a fallacious argument to think that this stuff can be stopped. It's not usually just one genius, you know, like Terminator 2, where you kill the guy and nothing happens, because he's the only genius who can bring this to fruition.
You've got hundreds, maybe thousands of people working on it. It was like the nuclear bomb, right? If Oppenheimer and those guys hadn't done it, it's not like we wouldn't have any nuclear technology at all, right?
[00:54:50] **Hirad:** Right. Maybe you get it a little earlier, but it's a foregone conclusion in my mind that this stuff is going to happen. That's exactly right. So what is going to be the consequence of all these people advocating for deceleration? Again, coming back to this whole thing about manufacturing consent: what they're doing is laying the groundwork for the government to come and say, oh hey, this thing is way too dangerous.
So all of you had better not have it; we're just going to have all of it. And the thing that is particularly scary, because artificial intelligence is such a powerful technology, is how insanely scary it would be if only the government had it and access to it was not democratized. Because the kind of thing we're talking about is God-like technology, in the sense that there was a point where you could have some privacy in your home, or privacy when you were just out and about in your day.
Then there were CCTV cameras, so now you're tracked everywhere you go in public. But CCTV cameras require someone to be sitting there watching the footage, or at least tracing people through different parts of the footage, and technology can help with that. Now, with artificial intelligence, you can be aware of everything all the time.
The only limitations are your compute power and your energy, which are real limitations, don't get me wrong, but that is a scary amount of power. In terms of psyops, in terms of effective propaganda, in terms of all kinds of things, it's a very worrisome thing to say only the state can have this technology. I don't think the decelerationists realize this, but it is effectively what they're advocating.
[00:56:35] **Trish:** Right. Yeah. Well, I feel like once upon a time, maybe 15 years ago, I might have been a techno-optimist. Actually, I feel like I probably definitely would have been.
[00:56:44] **Hirad:** Man, I feel like five years ago. Yeah.
[00:56:46] **Trish:** Do you have anything else that stuck out to you, that you wanted to add?
[00:56:51] **Hirad:** I'm glad we actually got to talk about the broader picture of AI as well, because one of the things that is shocking to me is that I use ChatGPT every day. I very rarely use Google anymore.
[00:57:09] **Trish:** Hmm.
[00:57:10] **Hirad:** I don't need it, because ChatGPT does most of the job.
But sometimes I talk to people and they just aren't even aware that this thing is going on, let alone aware of the pro and con arguments about AI acceleration or deceleration. And these are profoundly important topics right now. I'm perceiving that a lot of people who are not in tech don't realize the magnitude of the change that's coming, or that's already here, with GPT, because they go and try the free version and think, oh, this is nothing special. The free version is the dumb version.
It's not the thing that you can...
[00:57:54] **Trish:** Do you pay for it?
[00:57:55] **Hirad:** Like a lot of things, I pay for it. Yeah, I pay.
[00:57:57] **Trish:** How much is it?
[00:57:58] **Hirad:** Like 20 bucks a month. 20 bucks a month for the latest model.
[00:58:01] **Trish:** So what type of things are you typing in there? I'm just so curious, as someone who has literally never typed anything into ChatGPT in my entire life.
[00:58:11] **Hirad:** Two examples that I used for this episode, just before we started. There you go, this is actually a great thing for us to cover as well. Because Marc Andreessen, in his Techno-Optimist Manifesto, talks about how "we're not utopians" because they believe in Thomas Sowell's idea of the constrained vision.
[00:58:30] **Trish:** A bastardized version of Thomas Sowell, may I add. I was like, don't... don't you kind of think it's kind of a bastardized version?
**Hirad:** In the Techno-Optimist Manifesto? Yeah.
[00:58:40] **Hirad:** So I didn't exactly know, because I haven't read Thomas Sowell yet, what his constrained vision idea is. And I could go Google it and read a bunch of, you know, random things, because I'm not going to go read the book.
[00:58:54] **Trish:** Well, you're not going to read the book, Hirad? Right.
[00:59:00] **Hirad:** We've been trying that with other episodes; I wasn't going to try it for this one. So I just went to ChatGPT and said: please concisely explain Thomas Sowell's concept of the constrained vision. And boom, I have it in a 200- or 300-word description that gives me the high-level overview, and I can keep probing into different things, and it'll just give me more and more detail if I want it.
Another thing was the history of eugenics. I knew it had happened, but I didn't know the details of what had led to it, some of the details of the implementation, or the name of the case: the Buck versus Bell case.
That was the case where the Supreme Court upheld the constitutionality of these eugenics laws.
[00:59:48] **Trish:** Yeah. Planned Parenthood, a nice little relic from that era.
[00:59:52] **Hirad:** I didn't get into Planned Parenthood.
[00:59:54] **Trish:** Oh yeah, no, that's what Planned Parenthood originally was for. That's why they were so into managing births and safe abortions and everything. It was Margaret Sanger, right? She was a big eugenicist.
Yeah, they've got a little bit of a checkered history.
[01:00:11] **Hirad:** Ooh, fascinating.
[01:00:14] **Trish:** Mm hmm.
[01:00:15] **Hirad:** So yeah, well, it didn't tell me that. It doesn't always tell you everything. But I'm also in tech, so I use it for a lot of programming stuff all the time. Someone I was just talking to recently is a financial analyst, and he basically gets it to write really complex Excel formulas that he doesn't understand.
I'm good at Excel, but I'm not that good at Excel. So he just asks for more complex things that he doesn't know how to do, but when ChatGPT generates them and there's an error, he knows how to fix it. And I do that with my work as well.
Now I have no limitation as a software developer. I have no limitation on what kind of technology stack I can work with, because I know the concepts; I just don't know the details of one tech stack. But I just take it on, because I know I have this superpower assistant that can give me all the details. I can tell it: I want to be doing exactly this; just show me how to do it in this particular tech stack that I'm not familiar with.
[01:01:16] **Hirad:** Instead of searching through documentation for ages and trying to assemble the concepts in my head, I just outsource it to ChatGPT.
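(Show-notes aside: for listeners who want to try Hirad's "please concisely explain..." workflow as a script rather than in the chat window, here's a minimal sketch using OpenAI's Python client. The client setup and model name are illustrative assumptions on our part; nothing in the episode specifies them, and everything Hirad describes was done in the regular ChatGPT interface.)

```python
# Minimal sketch: sending the same prompt Hirad describes, via OpenAI's Python client.
# Assumes `pip install openai` and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice, not one named in the episode
    messages=[
        {
            "role": "user",
            "content": "Please concisely explain Thomas Sowell's concept of the constrained vision.",
        }
    ],
)

# Prints the short high-level overview; follow-up messages can probe for more detail.
print(response.choices[0].message.content)
```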
[01:01:25] **Trish:** So if you type in, like, let's take the Thomas Sowell thing, for example, you type it in, how long does it take to kick that back to you?
[01:01:33] **Hirad:** Five seconds, 10 seconds.
[01:01:35] **Trish:** Really? Oh, that fast? Wow. Ha!
[01:01:38] **Hirad:** it's very fast. Yeah.
[01:01:40] **Trish:** I'm gonna start using ChatGPT.
[01:01:42] **Hirad:** Anything else to add from your side before we wrap up?
[01:01:46] **Trish:** No, I feel like it's a good length. We hit on some good stuff. Well, thanks for joining us today, listeners. We'll be back again soon.
[01:01:54] **Hirad:** Talk to you later.