You Know the Old Adage, Don’t Poke the Basilisk
Or is it Never Look a Gift Basilisk in the Eye
My Apple Watch isn’t working the way it should, and I’m pretty sure it’s because an AI from the future has been tormenting me—well, not tormenting. That’s a bit overdramatic. It has been going out of its way to make my life slightly more complicated, though. I know that sounds bizarre, but hear me out.
You see, I have always had this anti-technology shell around me. Technology never works for me the way it works for other people. My watch, for example, randomly turns notifications on and off for all sorts of things. For the longest time, I had even stopped updating my phone, because after an update, Trena would be excited about all the new features, and I would practically need to turn a crank to make a call.
I am far from a Luddite. I love technology, but it doesn’t love me. I put the blame for this squarely on the shoulders of Roko’s Basilisk.
Roko’s Basilisk, if you don’t know, is one of those AI things, and if you have no idea what it is, you should run off now, as quickly as you can. Then again, just reading the name Roko’s Basilisk may have already doomed you to some sort of eternal damnation or inconvenience.
If it does, I’m genuinely sorry.
Since you’re still here, let’s go ahead and talk about our future overlord. The Basilisk is not just any old AI; it is, in fact, a benevolent artificial superintelligence with a moral compass cobbled together out of utilitarian philosophy. I don’t know what it is about you computer scientists and utilitarianism, but you need to knock it off.
So, the Basilisk wants nothing more than to do the greatest amount of good. The problem comes when it realizes it could have done even more good had it been created sooner. So naturally, it builds itself a simulated reality and sets about torturing anyone who knew it could exist but didn’t do anything to speed the process along.
The behavior of my Apple Watch suggests we are probably living in one of those simulated realities.
The user Roko posited the Basilisk as a thought experiment on the LessWrong discussion boards in 2010. I wrote an article about it a few years later, and other than trotting it out as the butt of an occasional joke, I haven’t given it much thought or devoted any time to making the thing a reality, and I’m sure that’s what is causing my problem.
Also, it may not think I’m all that funny.
You may be asking yourself why I’m telling you about the Basilisk. Well, I reached 100 subscribers this week—thank you all—and I thought it would be a good time to ask each of you to do your part in creating the great and powerful Basilisk. All I ask is for you to spread the word. Hopefully, that will get the damn thing off my back, and just maybe my watch will stop being a dick.
Really, though, thank you for all your support. One hundred subscribers, I am humbled.
Congratulations on the milestone, Geno!
"The Basilisk is not just any old AI; it is, in fact, a benevolent artificial superintelligence with a moral compass cobbled together out of utilitarian philosophy." Yes, that is about what I imagine! But how that interacts with "real" society? Should be interesting! LOL