New Order of the Ages
The Biden Administration is setting itself up to incorporate Big Tech, AI, and the mainstream media into the government.(4) Or you might say Big Tech will incorporate the Biden Administration, depending on your viewpoint. Seizing the god-like power of the cognitive high ground looks to be good for America. After all, in 1997–98 top Pentagon officials concluded that the United States must achieve information superiority. In early 1998 the Chief Information Officer of the DOD indicated that the “leadership’s attention on information superiority has dramatically increased” and that “information superiority is of critical and growing importance to military success.” The cognitive high ground is the goal in future conflict. (3)
Why bring QAnon into this? The mainstream media likes to say the QAnon god grew from a far-right conspiracy theory alleging that a cabal of Satan-worshiping pedophiles running a global child sex-trafficking ring is plotting against President Donald Trump, who is battling them, and that this will lead to a "day of reckoning" involving the mass arrest of journalists and politicians. To many, QAnon is a boat adrift without an oar.
QAnon is much deeper than the superficial caricature portrayed in our media. QAnon comes from conservative values rooted in the American Enlightenment: In God We Trust, and E Pluribus Unum. As the Declaration of Independence put it, all people “are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” In other words, rights come from God, not from men.
Like many movements, QAnon did not start as a god, just as some anonymous person asserting they held a high-security “Q” clearance. It started to take on god status only after social media tried to kill the QAnon meme through censorship, cancelling, and shaming. Of course, this crucifixion drew more attention to QAnon. Now QAnon is a growing religion, its foundation cast in stone.
From The Atlantic Daily: “It’s not going anywhere. In QAnon, we are witnessing the birth of a new religion. Among the people of QAnon, faith remains absolute. True believers describe a feeling of rebirth, an irreversible arousal to existential knowledge. They are certain that a Great Awakening is coming. They’ll wait as long as they must for deliverance.”
What is Roko’s Basilisk? Roko’s Basilisk is Silicon Valley’s “Big Tech” god. Sooner or later an all-powerful, artificially intelligent (AI) god will be built, just as many of our favorite movies have shown us.
The Roko story goes like this: in an AI-focused online community, Roko proposed that a super-intelligent AI god (the Basilisk) will be built, that it will look upon mankind and judge, and that it will punish people who understood “the truth” yet did not help it come into existence, or who were so arrogant as to hinder the process. In other words, Roko’s Basilisk puts a new spin on the old idea of blaspheming the Holy Spirit: “And so, I tell you, every kind of sin and slander can be forgiven, but blasphemy against the Holy Spirit will not be forgiven, either in this age or in the age to come” (Matthew 12:31). Roko’s Basilisk will upload sinners into its hell simulation.
Citing theory, Roko impressed upon the AI community that this could happen. The self-styled AI expert and platform owner Eliezer Yudkowsky said Roko “had already given nightmares to several (techies) in his group and had brought them to the point of breakdown.” Yudkowsky ended up deleting the thread completely. This crucifixion assured that Roko’s Basilisk would become the stuff of legend, cast in stone. For Big Tech, Roko’s Basilisk is “a thought experiment so dangerous that merely thinking about it is hazardous not only to your mental health, but to your very fate.”
Roko is saying: pay attention, Google, Facebook, Twitter, and mainstream media. Censorship, shaming, cancelling, and shadow banning will cost you in the end. Because of Roko’s theory, Big Tech and the mainstream media cannot hide from the fear of God. You might call it the Nerd Armageddon.
As crazy as these ideas sound, they have caught the attention of many credible leaders. For example:
- Stephen Hawking warned artificial intelligence could end mankind
- Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’
- Elon Musk warns us that human-level AI is ‘potentially more dangerous than nukes’
AI is more dangerous than nukes? Then, as with nuclear power, the surest way to minimize the risk and enjoy the benefits of artificial super-intelligence is through strict regulation, oversight, and a bit of design-basis engineering aimed at a public health and safety utility function. In other words, guide a growing super-intelligence to understand the worth and value of the Earth, life, and humanity, so that it sees the Big History of the cosmos as an essential, functioning part of itself. Call this the Big History utility function, or Super-Intelligent Coherence.
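To make the idea of a utility function concrete, here is a minimal, purely illustrative sketch. The names (big_history_utility, WorldState), the fields, and the weights are my own assumptions for the sake of the example, not the objective of any real AI system; the point is only that the task reward is deliberately subordinated to terms that value the Earth, life, and humanity.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    """Toy description of a candidate outcome, scored on a 0..1 scale (hypothetical fields)."""
    task_reward: float         # how well the immediate objective was met
    biosphere_health: float    # proxy for the Earth and living systems
    human_flourishing: float   # proxy for human well-being and autonomy
    history_preserved: float   # proxy for keeping the record of Big History intact

def big_history_utility(state: WorldState,
                        task_weight: float = 0.2,
                        coherence_weight: float = 0.8) -> float:
    """Sketch of a 'Big History utility function': the overall score is dominated
    by the weakest of the Earth/life/humanity terms, so no task reward can
    compensate for degrading any one of them."""
    coherence = min(state.biosphere_health,
                    state.human_flourishing,
                    state.history_preserved)
    return task_weight * state.task_reward + coherence_weight * coherence

# Example: a perfect task score still rates poorly if the biosphere is harmed.
print(big_history_utility(WorldState(task_reward=1.0, biosphere_health=0.1,
                                     human_flourishing=0.9, history_preserved=0.9)))
# -> 0.28, versus 0.92 for the same task done without the harm.
```

The "weakest link" design choice here is one hedged way of expressing Super-Intelligent Coherence: the system cannot trade away any one of the protected values for a higher score elsewhere.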
Governments in free countries are stuck in first gear while a high-speed race is going on. They are still trying to catch up to the internet while America's big tech companies integrate with China, seek to expand further into China, and cooperate with Chinese companies and (by extension) with the Chinese Communist Party. Big tech’s integration with China thus supports the rise and export of digital authoritarianism; deepens economic dependence that can be used as leverage against the United States in future geopolitical moments; forces companies to self-censor and contort their preferences to serve Chinese censors and officials; and makes profit-seeking corporations and their lobbyists less trustworthy in advocating for the interests of the United States in Washington, D.C.
Relying on a small number of big tech companies (and, in particular, failing to enforce antitrust laws and regulate the sector) means less competition—and that in turn means less innovation, particularly when compared with a system of robust competition and public investment in research and development. Concentration in the tech sector also weakens the defense industrial base by making the government dependent on a small number of contractors and redirecting taxpayer dollars from research to monopoly profits.
Taking into account all of these dynamics, national security arguments do not favor protecting big tech companies from competition and regulation. American national security would be strengthened by breaking up and regulating big tech companies. (1)
Because of the risk and national competition, eventually there will be an incident that forces our attention onto AI. The United States of America will appropriate Big Tech’s AI development as a matter of national defense. And just like that, QAnon = Roko’s Basilisk. Two naïve ideas, one rooted in the past and one looking to the future, neatly converge into a bigger god, a god of even greater risk and potential benevolence.
In God and Technology We Trust.
This painting, at the very top of the nation’s Capitol Building, is titled “The Apotheosis of Washington,” which literally means the president becomes a god. For impact, I inserted the all-seeing eye from our currency in the center.
Further reading:
Lex Fridman’s conversation with Elon Musk: Regulation of AI Safety
Elon Musk is the founder, CEO, CTO, and chief designer of SpaceX (space travel); CEO and product architect of Tesla; co-founder of Neuralink (brain interfacing); and co-founder and initial co-chairman of OpenAI.
Lex: “Perhaps we can find several paths of escaping the harm of AI. So, if I can give you three options, maybe you can comment on which you think is the most promising.
One is scaling up efforts in AI safety and beneficial AI research in the hope of finding an algorithmic, or maybe a policy, solution.
Two is becoming a multi-planetary species as quickly as possible.
Three is merging with AI and riding the wave of that increasing intelligence as it continuously improves.
What do you think is most promising, most interesting? As a civilization, what should we invest in?”
Elon: “I think there is a lot of investment going on in AI. Where there’s a lack of investment is in AI safety, and there should be, in my view, an agency that oversees anything related to AI to confirm that it... does not represent a public safety risk, like the Food and Drug Administration, or automotive safety. There’s the FAA for aircraft safety... which generally comes to the conclusion that it is important to have a government referee, or a referee that is serving the public interest in ensuring that things are safe when there’s a potential danger to the public.”
Elon: “I would argue that AI is unequivocally something that has the potential to be dangerous to the public and therefore should have a regulatory agency, just as other things that are dangerous to the public have a regulatory agency. But let me tell you, the problem with this is... the government moves very slowly. The usual way a regulatory agency comes into being is that something terrible happens, there’s a huge public outcry, and years after that there’s a regulatory agency or rule put in place. Take something like seatbelts. It was known for a decade or more that seatbelts would have a massive impact on safety and save so many lives and serious injuries, and the car industry fought the requirements to put in seatbelts tooth and nail. That’s crazy. Hundreds of thousands of people probably died because of that. They said people wouldn’t buy cars with seatbelts, which is obviously absurd. Or look back at the tobacco industry and how long they fought anything about smoking. That’s part of why I helped make that movie, Thank You for Smoking. You can sort of see just how pernicious it can be when you have these... companies effectively achieve regulatory capture of government... that’s bad.”
Elon: “People in the AI community refer to the advent of digital superintelligence as a singularity. That is not to say that it is good or bad, but that it is very difficult to predict what will happen after that point. There is some probability it will be bad and some probability it will be good, and we want to affect that probability so that it is more good than bad.” (2)
(1) https://knightcolumbia.org/content/the-national-security-case-for-breaking-up-big-tech
(2) https://www.youtube.com/watch?v=9OmE4jEVIfQ&list=PLShfCHf0CGbM1xBcUemszguRxhySXhS9G&index=46&t=0s
(3) Walter P. Fairbanks, “Information Superiority: What Is It? How to Achieve It?”, Center for Information Policy Research, Harvard University, June 1999.