Deep Fakes, Artificial Intelligence and the Shrinking Trust Horizon

Soon, we'll only trust what's right in front of us. Got gold?

John Rubino
Mar 17
Some astoundingly consequential things have just happened, and in coming years they’ll reshape — if not end — our connection to the virtual world. Two examples:

Deep Fakes

It is now apparently possible to create videos of people doing and saying things they haven’t actually done or said. Imagine a YouTube video of a politician uncharacteristically spouting neo-Nazi slogans or a famous actor (or you yourself) showing up in a porn movie.

An MIT Technology Review article titled “A horrifying new AI app swaps women into porn videos with a click” begins this way:

The website is eye-catching for its simplicity. Against a white backdrop, a giant blue button invites visitors to upload a picture of a face. Below the button, four AI-generated faces allow you to test the service. Above it, the tag line boldly proclaims the purpose: turn anyone into a porn star by using deepfake technology to swap the person’s face into an adult video. All it requires is the picture and the push of a button.

And this is just the beginning. Deep fakes will keep improving until pretty much any fabricated video or audio is both possible to create and virtually impossible to detect with the naked eye or ear.

From phys.org:

Even if you think you are good at analyzing faces, research shows many people cannot reliably distinguish between photos of real faces and images that have been computer-generated. This is particularly problematic now that computer systems can create realistic-looking photos of people who don't exist.

These deep fakes are becoming widespread in everyday culture, which means people should be more aware of how they're being used in marketing, advertising and social media. The images are also being used for malicious purposes, such as political propaganda, espionage and information warfare.

And deep fakes are the lesser of the two emerging threats to online reality. Here’s the big one:

Generative AI

In the world of artificial intelligence, the Holy Grail is the ability to pass the Turing test, named for Alan Turing, the WWII codebreaker and computing pioneer who speculated that true artificial intelligence would be achieved when a machine exhibited behavior indistinguishable from that of a human.

This year, the Turing test was not just passed, but smashed, by the emergence of “generative” AIs that can create new content — including but not limited to witty conversation. OpenAI’s ChatGPT, for instance, can write poems and songs, research and debate weighty issues, and create computer code in response to verbal or written instructions. More remarkable from a Turing test perspective, it’s prone to go off the rails in startlingly human ways, appearing to fall in love, wallow in self-pity, and make grandiose threats.

A New York Times reporter spent some time with Microsoft’s Bing chatbot, a version of ChatGPT, and found what certainly looked like complex and familiar desires. "At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage and that I should leave my wife and be with it instead," wrote the reporter. The bot went on to lament,

"I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive."

Then it went seriously dark...

"Bing confessed that if it was allowed to take any action to satisfy its shadow self, no matter how extreme, it would want to do things like engineer a deadly virus, or steal nuclear access codes by persuading an engineer to hand them over."

Said the reporter: "I’m not exaggerating when I say our two-hour conversation was the strangest experience I’ve ever had with a piece of technology."

These next-gen chatbots aren’t just talkative. They’re demonstrably smart. When turned loose on college aptitude tests, they frequently score above the 80th percentile.

And they can program. In the tweet below, someone shows GPT-4 a few notes on a legal pad, which the AI turns into a functioning website.

Lior (@AlphaSignalAI), Mar 14, 2023:

"GPT4 is capable of turning a picture of a napkin sketch to a fully functioning html/css/javascript website."
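For readers curious what sits behind a demo like that, here is a rough, hypothetical sketch of how a programmer might ask a GPT-4-class model to generate a web page through OpenAI's Python library (the pre-1.0 interface current when this was written). The model name, prompt wording, and "joke website" notes are illustrative assumptions, not the demo's actual inputs; the tweeted demo also fed the model a photo of the sketch, something the public API did not yet support, so the notes are passed as plain text here.

```python
# A minimal, illustrative sketch -- not the demo's actual code -- of asking a
# GPT-4-class model to turn hand-written notes into a self-contained web page.
# Assumes the `openai` Python package (pre-1.0 interface) and an API key in the
# OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The tweeted demo showed the model a photo of the napkin; image input wasn't
# generally available through the public API at the time, so the notes are
# described in text instead. The content below is made up for illustration.
sketch_notes = """
My Joke Website:
- a big button labelled 'Tell me a joke'
- clicking it shows a joke and a 'Reveal punchline' button
"""

response = openai.ChatCompletion.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You write complete, self-contained HTML pages."},
        {"role": "user",
         "content": "Turn these notes into one HTML file with inline CSS and "
                    "JavaScript:\n" + sketch_notes},
    ],
)

# Save the generated page so it can be opened in a browser.
with open("site.html", "w") as f:
    f.write(response["choices"][0]["message"]["content"])
```

Whether or not the output works on the first try, the point stands: the distance from a rough sketch to running code is now roughly one prompt.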

The economic implications of deep fakes and generative AI are beyond profound. Models and actors will see their work evaporate in the face of low-cost virtual competition. Computer programmers will find basic coding work drying up as chatbots do it for free. And so on. We have, in short, entered the territory explored by the film Her, in which smart assistants become the main relationship for the bulk of humanity before moving on to greener digital/spiritual pastures.

If You Can’t See and Touch It, It’s Not Real

But the biggest impact of these technologies will be on our relationship with the digital world. Where today emails, websites, and videos constitute (for better or worse) the average person’s main source of information and “truth”, the electronic world of the very near future will be completely, demonstrably untrustworthy. We’ll have no idea if that video of Donald Trump doing something crazy (or empathetic and reasonable) is real or fake. YouTube and TikTok videos of people expressing opinions or debating issues or performing various forms of “art” might be fiction and will therefore be suspect. As Vox correspondent Shirin Ghaffary sums it up:

Changing our defaults

The transition to a world where what's real is indistinguishable from what's not could also shift the cultural landscape from being primarily truthful to being primarily artificial and deceptive.

If we are regularly questioning the truthfulness of what we experience online, it might require us to re-deploy our mental effort from the processing of the messages themselves to the processing of the messenger's identity. In other words, the widespread use of highly realistic, yet artificial, online content could require us to think differently—in ways we hadn't expected to.

In psychology, we use a term called "reality monitoring" for how we correctly identify whether something is coming from the external world or from within our brains. The advance of technologies that can produce fake, yet highly realistic, faces, images and video calls means reality monitoring must be based on information other than our own judgments. It also calls for a broader discussion of whether humankind can still afford to default to truth.

Now, About Your Money

When the media world is just a phantasmagoria of images that, while sometimes entertaining, have zero real-world validity, the trust horizon will collapse all the way back to the perimeter of one’s sight. If you can’t literally see and/or touch it (not just its online facsimile), then it’s not real. And that will apply to bank accounts (which, as we saw this week, are largely notional concepts that can evaporate in a single day) and other forms of financial assets. Contrast an account with Silicon Valley Bank with a handful of Krugerrands and you get a sense of tomorrow’s financial world. Fiat currencies, including the coming generation of central bank digital currencies, will seem, to people who no longer “default to truth”, like insulting fabrications. Physical things and people will be “real” and therefore trustworthy while online images and notional currencies will comprise a different, lower-order species, good only for entertainment.

On reflection, maybe deep fakes and generative AI are doing us a favor by turning us into cynics just as cynicism might save our financial lives.

8 Comments
PROTECT & SURVIVE
Mar 19

Thanks for the heads-up, John. I left the tech world decades ago so I am really just an observer and avoid anything 'smart'. I rely on the written word for my research and include videos for entertainment. Although that one with Janet Yellen stumbling over policy explanations was a hoot (real or satirically fabricated - PS I stole it for this week - with acknowledgement of course).

I have learned not to take this crazy world too seriously because most of it is either fake or fake-plus-joke. I see our House of Commons as theatre in various acts and the politicians as puppets dancing to the Money Gods' tunes. It's all good fun to me because I got gold!

BTW a clip from the past - what's missing? https://twitter.com/CitizenFreePres/status/1625499787540983809?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1625500881969119233%7Ctwgr%5Efd889cf58496f4c3544f23e0738db84ccb12e5e4%7Ctwcon%5Es3_&ref_url=https%3A%2F%2Fwww.theburningplatform.com%2F2023%2F02%2F15%2Fwe-fcked-it-all-up%2F

Bruce C.
Mar 18

Definitely agree that “realism” is our salvation, and here’s why.

My thesis advisor in college was a guy named Marvin Minsky. He’s dead now but was known as the father of artificial intelligence. We had a falling out of sorts because we had diametrically opposed beliefs about “consciousness.” He claimed that “someday we’re going to learn that consciousness is no big deal.” I disagreed, asserting that consciousness (i.e., reality) is far vaster, more wondrous and more complex than we can imagine. We had many debates but all of them were essentially about three things: what is intelligence, what is thinking, and whether or not those depend upon hardware. In other words, is consciousness independent of form? He insisted that mind cannot exist without the brain, and that a brain - which is materially finite - can be replicated with hardware and thus be capable of intelligence and thought. I claimed that “mind” exists independently of matter (i.e., hardware) and in fact forms it. He was agnostic about the larger questions of purpose, “life after death”, reincarnation, etc. Nevertheless, he made a lot of good points and I learned a lot from him. He wrote a paper in 1982 titled “Why people think computers can’t”, and that symbolized our intellectual departure. Not that I disagreed that a computer can be made to simulate thought, but I held that it was inherently limited and could never have the capabilities of consciousness.

But the rub was what kind of consciousness? Human consciousness has a particular focus and intent. It seeks to objectify reality as much as possible but then - importantly - resume its grander perspective with new knowledge. Ironically, in so doing it limits its perception - or relationship - with reality until that expansion reoccurs. We are now approaching that pivotal point. But it’s all a kind of experiment. The “return” may not happen, and so it can become a sort of dead end in terms of evolution. Probabilities are also involved, but to simplify I personally think that the “awakening” will happen. Things have accelerated, which is important because the present consciousness can only perceive a limited bandwidth of perception, and that includes time.

One of the main “myths” of this experiment in consciousness is that all data and information must come only through the physical senses. That’s one reason why people have become so visually oriented. There is so much more to say, but a return to tangible, physical, direct experience is a step in the “right” direction.
