#35: Why We Need A Cryptographic Internet 🔐🌐

Crypto Biden 🤖 & Sora OpenAI Video Generator 🍿

Last week, the White House commented that it is working on “cryptographically verifying all communications from the White House”, including written statements and video.

This week, OpenAI launched their new generative video AI platform: Sora.

Coincidence? Maybe.

I think not 👀

Either way, the US government clearly recognizes the need to validate the information it shares online in a more secure manner than simply posting from social media accounts.

This message may come as a surprise to many people after frequent negative U.S. commentary on “crypto”. I have one word for you: deepfakes.

If you think deepfakes aren’t a big deal, I urge you to recall the 2016 election. Without getting into the political side of things, it’s essential to know that Facebook & Cambridge Analytica skewed the democratic process in the US.

I repeat: the democratic process of one of the most powerful nations in the world was impacted by people creating fake news and publishing it on Facebook.

In 2016.

It’s now almost a decade later (🤯) and it’s not overseas parties creating fake news, it’s anyone with an Internet connection and access to generative AI (aka almost everyone).

Literally anyone in the world can make it appear as though the leader of a country is saying or doing something that they never said or did.

It’s always been difficult to trust the truth of what you read online, requiring you to do your own due diligence (DYOR – do your own research).

Now, it’s becoming difficult to trust WHO is sharing the information in the first place.

How can we build trust online?

The White House seems to have finally acknowledged an answer: cryptography.

Recalibrating Recap 🧭 🧠 ✨

Welcome to Recalibrating! My name is Callum (@_wanderloots)

Join me each week as I learn to better life in every way possible, reflecting and recalibrating along the way to keep from getting too lost.

Thanks for sharing the journey with me ✨

Last week, we touched on three major issues with traditional centralized social media (web2) and how decentralized social media connected to blockchain (web3) provides solutions to these web2 problems. I also explained the Farcaster Protocol and Warpcast Client as the current web3 social solution that I am most excited about.

This week, we are going to continue by discussing why verifiable communication online is essential to the future of the Internet.

I am serious.

Many of us regularly use our digital identities to engage with others online. If we can’t trust those communications, how can we continue to use the Internet in a safe and consistent way?

For more context on digital identities and building intangible value, please see my last YouTube video:

I also have some exciting news about a new element of Recalibrating 👀 I’ll share the announcement at the end, since it ties into what I am talking about today and it will make more sense to you after reading this entry.

The Bigger Picture Of Web3 & Cryptography 🔐🌐

Crypto has a bad reputation for scams and moneymaking schemes. I get it. This reputation is completely fair given how many scams and dodgy people have been involved with crypto over the years.

But there’s so much more to cryptographic engagement online than what people think of when they hear “crypto” or “cryptocurrencies”.

Just because there are bad actors in an economic system does not mean that the system itself is flawed. With money comes people who try to cheat the system and scam others. It’s human nature.

I wish it weren’t, but it seems to be 🤷‍♂️

Sure, the money is interesting from a monetization standpoint for creators; it’s one of the reasons I’ve been interested in NFTs, cryptocurrencies, and web3 platforms generally.

But money isn’t everything.

The cryptographic picture is SO MUCH BIGGER than you are thinking.

It’s a revolution of the Internet, introducing authenticity to our communications and removing centralized power from the mega-corporations and their algorithms that are so manipulatable they can impact democracy in America.

It’s less a question of money and more a question of authenticated identity-based communications.

The money only becomes interesting because of the ability to verify identities online.

The Bombing Of The Pentagon

On May 22, 2023, an image was posted showing the Pentagon on fire, presumably after being bombed.

People freaked out, the stock market took a hit, I assume people were contacting their loved ones in a panic.

The world lost its mind for a moment…

… only to realize that the image was fake. (Note: I did not repost the original because I do not want to propagate the fake image of the Pentagon being bombed.)

A screenshot of the fake Pentagon bombing tweet.

People saw the smoke, they saw that the image was posted by “BloombergFeed”, a verified account on Twitter (now X), and they freaked out.

The real Bloomberg account just has the username of “Bloomberg”, but how many of you knew that with certainty? 🤔

The issue is that people see something online and they do not DYOR (do your own research).

It’s so easy to just press retweet, share to a story, send to a friend, etc. By the time people actually realize that the information is fake, potentially millions of people have already seen it. The damage has been done.

My guess is this was an example of stock market manipulation, since the poster was BloombergFeed and Bloomberg’s audience is often financially-interested people.

As Andy Campbell notes above, this problem arose because X introduced a “pay-to-verify” system. If you pay to get a premium account on X, you get a blue verified checkmark.

The idea is to reduce bots and increase trust, but in reality, it just means that people are more likely to fall for scams by clever posters because the account is “verified”.

Remember That Time Obama Trashed Trump?

In 2018, Obama published a video statement calling Trump a complete and total dipsh*t.

… or did he?

The video was actually a public awareness piece produced by Jordan Peele to increase people’s understanding of deepfakes.

Deepfakes can be defined as:

synthetic media that have been digitally manipulated to replace one person’s likeness convincingly with that of another. Deepfakes are the manipulation of facial appearance through deep generative methods.

Effectively, deepfakes are images, videos, or audio that have been manipulated to convince people that someone did or said something they didn’t.

Typically, we think of them as manipulating the facial expressions of a person who is speaking. We tend to trust video more than other media because, I think, we see human facial expressions as an indicator of empathetic truth.

In hindsight, the video quality above is quite poor and you can clearly see now that Obama’s mouth was manipulated in the video to match the speech.

But this was 6 years ago. Today’s consumer generative AI tools didn’t exist yet.

Imagine modern deepfakers spreading information about war updates, the use of nuclear weapons, the political commentary of candidates for an election…

There is no end to the possibilities of how artificial intelligence can be used to manipulate what we see online.

If you think you will be able to tell the fake from the real, I beg you to think again. Naivety is not a good trait when the stakes are this high. It’s better to assume falsehood unless otherwise proven true.

Don’t trust, VERIFY.

Especially now that generative video AI has drastically levelled up.

Sora Generative Video AI – OpenAI

Enter Sora by OpenAI. I recommend watching the intro videos on the OpenAI website; the resolution is impressive.

Sora was released 2 days ago and has caused a huge stir across the world. Some people are freaking out over the potential job loss implications of having an AI generate perfect video for you. Others are in disbelief at how good it actually looks.

Personally, I am more concerned, on an existential level, about the future of the Internet and human communication.

If it’s this easy to fabricate a person doing and saying whatever you want them to, how are we ever going to tell the true stories from the fabricated ones online?

The Internet has always been a place where you had to DYOR to see if you could trust the content. It’s about to get so much harder to tell.

Sora & Safety

Now, OpenAI did include a section on the Sora home page that explains how they plan to introduce safety features around the creation of AI content (aka avoiding misinformation & fake news).

One of these features is introducing a metadata tag in the file itself that says “this video was created by AI”. This feature is similar to what Adobe has been doing with content credentials for Adobe Firefly (Adobe’s generative art software, similar to DALL·E or Midjourney).

However, this feature is very easily bypassed. All you have to do is screen record or screenshot the image on your computer and the metadata is lost. Someone can then repost the image or video and you would never know that it originally had AI tagging.
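To make that concrete, here is a minimal Python sketch using the Pillow imaging library. The filenames are hypothetical, and this is only an illustration of the principle: copying the pixels (the digital equivalent of a screenshot) produces an identical-looking file with none of the original metadata.

```python
from PIL import Image  # pip install Pillow

# Hypothetical filename: an AI-generated frame carrying a provenance tag.
original = Image.open("sora_clip_frame.png")
print(original.info)  # embedded metadata (e.g. an "AI-generated" tag) lives here

# Copy only the pixel data into a fresh image, then save it.
pixels_only = Image.new("RGB", original.size)
pixels_only.putdata(list(original.convert("RGB").getdata()))
pixels_only.save("reposted_frame.png")

# The reposted file looks the same, but the provenance metadata is gone.
print(Image.open("reposted_frame.png").info)
```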

Another safety feature is that the Sora model is currently not open to everyone. OpenAI is working with red teamers, people who are “domain experts in areas like misinformation, hateful content, and bias”. Effectively, red teamers will be trying to get the Sora model to create outputs that would be bad for humanity.

My guess is that they would be testing things like how easy it is to fabricate:

  1. political statements
  2. war news
  3. anything that could get anyone cancelled
  4. general hate media and negative misinformation

OpenAI is also “building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora”. In other words, if a video is posted somewhere, a badge or tag could appear that says “this was generated by Sora”.

While these features are critical components of AI safety online, they do not do enough to solve the current flaws of web2 social media.

Tracing Content To The Source

Tracking the type of content is not enough to solve the problems we will see with AI video generation.

People will create video in Sora and then modify it as needed afterwards. People always find a way to strip features like content credentials from the underlying content when posting to web2 social media.

Furthermore, we cannot assume that current social media companies will implement the AI-detection tools OpenAI builds. Walled gardens require API use and usually charge a lot for access already. I would be surprised to see Meta or X give OpenAI free rein with their APIs.

What matters more, in my opinion, is tracking not just how the information was created, but who shares it in the first place.

What was the first instance of this content online? Who posted it? Where?

If the digital identity who posted the content is one that you have independently verified as being a source of trustworthy information, you can link the poster with the content and trust is built between the poster and the consumer.

For example, if the White House posts content on X and verifies that they were the ones who posted it, you can likely assume that the content has not been altered by a third party.

However, what about on Instagram? Facebook? Threads? Mastodon? Pinterest? (not that I expect Biden is out there pinning). If the White House does not have the same username across all social channels, it is possible for someone to repost the original content with a slight modification and fool the audience into thinking that it is authenticated content.

This issue is one I refer to as fracturing of the self: digital identity is split across the walled gardens of social media, with no verifiable link between them.

Solution: Cryptographic Authentication 🔑 🔐 🌐

Now, when I refer to crypto, I want you to take a moment to recalibrate your perception of it. Seriously. Approach this conversation with beginner’s mind.

I am **NOT** referring to crypto tokens, currencies, spam, scams, or money-making schemes. Push those thoughts from your mind.

Ready?

I am talking about crypto in the original sense of the Greek root: crypto means “hidden” or “secret”. Cryptography is:

the study and practice of sending secure, encrypted messages or data between two or more parties. The sender “encrypts” the message, which obscures its content to a third party, and the receiver “decrypts” the message, making it legible again

I’ll get a bit into the technical specifics in a moment. For now, just consider what that statement means.

The White House has mentioned they are looking at “cryptographically verifying all communications from the White House”. This statement effectively means that the U.S. Government is looking at a way to encrypt their messages so that they cannot be falsified. But how does this work if we are supposed to be able to read or watch the encrypted message?

The answer is that the message itself is not encrypted; it remains visible to the public. The cryptography comes in on the digital identity side: the White House will have a digital cryptographic signature that the public can use to verify that a message really came from the White House.

A cryptographic signature is at the core of modern “crypto” and much of blockchain use (at least, the public chains you’ve heard of like Ethereum and Bitcoin).

The power comes from cryptographic keys.

Public-Private Key Pairing

Most crypto uses what is called public-private key cryptography. I’ll give an overly simplified explanation so you have an idea of how it all works.

Note: creating crypto wallets comes with risk (you can lose the seed phrase – password – and with it all of your assets) so I highly encourage you to DYOR before getting started. Feel free to message me in the /cal channel or by email if you have questions. Safety is a priority here.

When you create a crypto wallet, you are not creating something that “holds cryptocurrencies”. The wallet does not hold Bitcoin or Ethereum. The wallet holds your private key.

When you create a crypto wallet, you generate two keys: a private key and a public key. These two are inextricably linked together. The private key is stored in your wallet, something that only you have access to.

The public key, on the other hand, is just that: public. It’s the front-facing side of your wallet, kind of like a public bank account number that anyone can see.

The idea here is that anyone can send assets (cryptocurrencies, NFTs, newsletters 👀) to your public wallet address and you will receive ownership instantly.

However, only you can transfer ownership of these assets from the wallet with your private key. You can think of a private key as a really, really strong password.

It’s an elegant solution to building a public system with private individual access.
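For the technically curious, here is a minimal sketch of that idea in Python using the cryptography library. This is not how a real wallet is implemented (wallets add seed phrases, key derivation, and address encoding on top), but it shows the core sign-and-verify mechanic: only the private key can produce a signature, and anyone with the public key can check it.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Generate a key pair on secp256k1 (the curve Bitcoin and Ethereum use).
private_key = ec.generate_private_key(ec.SECP256K1())  # keep this secret
public_key = private_key.public_key()                  # share this freely

# Signing a message with the private key...
message = b"This statement really came from me."
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# ...lets anyone holding only the public key verify it.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("Signature valid: the private key holder wrote this.")
except InvalidSignature:
    print("Signature invalid: the message or the signer does not match.")
```

Notice that the private key never has to leave your side; only the message, the signature, and the public key travel across the Internet.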

Email Analogy

An analogy is email (again, back to email since it is a decentralized protocol).

If you put your email address in public, anyone can send you an email and you will receive it. However, ONLY YOU can send emails from that email address because you have the password to access your account.

A crypto wallet is effectively an account that lets you sign in (connect) to cryptographic systems in a way that others can see your public address but only you can sign communications with your private key (transactions).

Using A Crypto Wallet As An Identity

A good intro video on digital wallets and the web3 ecosystem by MetaMask.

Now, the reason I bring all of this up is because crypto wallets (public-private key pairings) enable a solution to the fractured identity problem I mentioned above.

With email, you can “sign in” to any web2 platform by creating a new password for that account, tied to the email address (public-private pairing). In some cases, you can even use something like Apple ID, Google, Facebook, etc., to create a password for you and then keep it in, e.g., iCloud keychain.

With each account, you can receive email communications from the web2 provider that you signed up with.

However, unlike email, a wallet does not create a new account for every platform you connect to. Instead, you can use a single wallet to connect to ALL platforms.

This ability to have a single identity source is at the core of what web3 decentralization means.

A Practical Example: Farcaster & Paragraph

Last week, I explained the concept of the Ethereum Name Service (ENS). Effectively, I can map my wallet’s public address to my ENS name: wanderloots.eth

So, if someone wants to send something to my public wallet address, instead of typing in ~40 random numbers and letters, they can just type in wanderloots.eth, and I’ll get it.

Another power of this ENS domain is that I can use the same username across all web3 apps.
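Under the hood, ENS is just a public lookup that anyone can perform. Here is a rough sketch with the web3.py library; the RPC URL is a hypothetical placeholder, and any Ethereum mainnet provider would work in its place.

```python
from web3 import Web3

# Hypothetical RPC endpoint; substitute any Ethereum mainnet provider URL.
w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.com"))

# Forward resolution: human-readable name -> wallet address
address = w3.ens.address("wanderloots.eth")

# Reverse resolution: address -> primary ENS name (if the owner has set one)
primary_name = w3.ens.name(address)

print(address, primary_name)
```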

An example of this is with Farcaster and Paragraph. I talked about Farcaster a lot more last week, so if you are confused at how it works, please check out last week’s entry.

Paragraph is a writing platform (both web2 and web3) that allows for newsletters and longer writing.

Using my ENS of wanderloots.eth, I was able to connect to both Paragraph and Farcaster (via Warpcast) and have the same username and identity. I didn’t have to create new accounts, I could just “connect wallet”.

Now, when I post on Paragraph and share to Farcaster, my audience can instantly verify that it is the same identity posting on both platforms, since they both have wanderloots.eth.

NO ONE ELSE CAN USE THIS NAME

I can’t emphasize the above enough. I have effectively linked my identity across platforms with a single web3 username.

This linking works for all of Ethereum-based web3.

All of my art is signed with this signature, all of my accounts are connected to wanderloots.eth. I have consolidated my identity.

Tying It All Together

Crypto wallets enable the linking of identity across platforms, which increases the security and trust my audience has in the content and art I produce. If I cryptographically sign a communication or transaction, everyone in the world can instantly verify that it is truly me signing it.
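As a rough sketch of what “everyone can instantly verify” looks like in practice, here is the Ethereum-style message flow using the eth_account library. The key pair and the statement text are made up for illustration; a real check would compare the recovered address against the address an ENS name like wanderloots.eth points at.

```python
from eth_account import Account
from eth_account.messages import encode_defunct

# A throwaway key pair standing in for a real wallet (never expose a real key).
wallet = Account.create()

# The poster signs a plain-text statement with their private key.
statement = encode_defunct(text="Entry #35 was published by wanderloots.eth")
signed = Account.sign_message(statement, private_key=wallet.key)

# Anyone can recover the signing address from the message and signature alone,
# then compare it to the publicly known address of the claimed identity.
recovered = Account.recover_message(statement, signature=signed.signature)
print(recovered == wallet.address)  # True: the signature proves who signed it
```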

This type of signature is what the White House will likely be implementing, which is a great call given the upcoming election. We all know what happened last time…

This type of cryptographic signature is what will drastically help with identity verification in an AI deepfake world.

If you haven’t started paying attention to the digital changes that are happening, perhaps now is the time. If you are anxious about these changes, please see my article on mental safety from AI-nxiety (yes it’s a real thing).

Deep Dive “Podcast”

To help give more context on web3 and the future of social media, I hosted a space last week on X (kind of like a live podcast) and recorded the session where my friend The Hiena (incredible animator, highly recommend checking him out) asked me a bunch of questions about the future of social media and Farcaster.

I summarized the main points in the first 5-10 minutes of the recording and then continued the discussion after that. Note that you do need an X account to listen. I’m working on my podcast, stay tuned 👀

Here’s a link to the recorded space, I hope it helps ✨

Future-Proofing & Paragraph

The future is always uncertain.

With change being the only constant in life, recalibrating becomes the greatest skill.

Whether the U.S. Government will continue with their crypto plans… 🤷‍♂️ we’ll have to see.

Regardless, it’s important to start actively considering the trustworthiness of the information you see online.

To that end, I have an announcement: I will begin posting to Paragraph, tied to my web3 wallet (wanderloots.eth).

A snippet of my first Paragraph post displayed in a frame on Warpcast (enabling in-line, in-app reading)

I know my Substack entries can be quite long, with complex information spanning many topics. I refer to this as long-form content.

Paragraph posts are complementary to my Substack newsletter and not meant to replace it.

My Paragraph posts will be different: medium-form content, shorter and more digestible. Think of them like Instagram Carousels or Twitter (X) Threads.

I’ve had many requests for shorter content to help people understand individual concepts better, and this is my solution 😊. My goal with this split is to have two segments of my weekly writing that help people understand the bigger picture. People seem to be liking it so far. I really appreciate the feedback ✨

I will be writing on Paragraph 1-3 times a week, and those posts will be sent to the email address you used for Substack. If you do not wish to receive the Paragraph posts, please, absolutely feel free to unsubscribe from the Paragraph emails. It will not affect your Substack subscription 😌

Please also check your spam folder this week, as it’s possible the Paragraph post was flagged. You can click “allow sender” so it doesn’t go to spam in the future.

I have been looking for an outlet to write shorter, condensed content and am extremely excited to be sharing more ✨

Next week

I’ll be sharing a few medium-form posts through Paragraph, I hope you like them 😊. My plan is to mix things up between mindfulness, recalibrating, and worldbuilding (building online), but we’ll see how it goes.

Note: If you want to learn more about the impact of information manipulation online, please check out my law school paper on fake news, and the use of Blockchain technologies as a potential solution.

Stay tuned ✨

P.S. If you are interested in learning how I build my digital mind (second brain) to help me process information and identify patterns to solve my problems, please consider upgrading your subscription to paid. Your support means more than you know 😌 ✨

Paid subscribers get full access to Worldbuilding, a practical counterpoint to the theories described in Recalibrating. You also get access to a private chat and bonus explanations exclusive to paying members 👀

If you are not interested in a paid subscription but would like to show your support, please consider buying me a coffee to help keep my energy levels up as I write more ☕️ 📝


Book of the week: Read Write Own by Chris Dixon

I picked this book up and just started it. I’ve heard great things from many people in the web3 space so I expect it to be a good read ✨


Photo of the week: Beautiful British Columbia

Available as a print on my Darkroom
