#24: Self-Worth ≠ Economic Value

Developing Original Value In An AI World

Updated News This Week

The hype around a potential “threat to humanity” from OpenAI seems to be subsiding as the company stabilizes with Sam Altman back as CEO. In other words, it’s been a few days without drama, so people are already forgetting the panic caused by potential “AGI”.

Critics note that if OpenAI had truly generated artificial general intelligence (AGI), it would have been difficult to keep under wraps and we would have heard more about it by now.

The only information we have received on what may have caused the dramatic corporate events is an updated AI model called Q* (pronounced Q star, maybe).

Instead of speculating on whether or not OpenAI has indeed created the seed of what may eventually grow into AGI, I thought it would be more beneficial to briefly look at Q* and see what the fuss is all about.

After reading numerous articles, the hype seems to revolve around one answer: an AI model that can do math properly.

But why does it matter if AI can do math? What does this have to do with a possible existential threat to humanity? How does this impact the future of value in our society?

Let’s dive in ✨

Recalibrating Recap

Welcome to Recalibrating! My name is Callum (@_wanderloots)

Join me each week as I learn to better my life in every way possible, reflecting and recalibrating along the way to keep from getting too lost.

Thanks for sharing the journey with me ✨ If you find this newsletter helpful, I would greatly appreciate it if you could share it with a friend who may also find it valuable 😊

Last week, we touched on the shifting values in our human society, away from overloaded logic and towards underloaded creativity.

This week, we are going to continue by discussing what it means to be original in an artificially intelligent world and how we can reframe value in such a rapidly changing system.

I’ll include some context on originality itself and some practical tips on how to identify and develop your self-originality.

The Bigger Picture (why you should care)

An AI system that can do math, so what? Why is everyone freaking out over AGI and the future of humanity?

Let’s look at an AI system you have heard of many times over the last year (by the way, its birthday was Thursday). ChatGPT is a large language model (LLM) based on deep learning and transformer technologies.

These technologies are excellent at pattern recognition, but not at reasoning. Think of them as very smart text predictors: based on the past writing they have been trained on, they guess what you would like them to say next.

An AI model that can reliably perform mathematical calculations moves beyond mere text prediction; it would be on its way to developing reasoning capabilities.
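To make the “very smart text predictor” framing concrete, here is a minimal toy sketch (my own hypothetical example, not how ChatGPT actually works internally) of prediction-by-frequency. It picks whatever word it has most often seen follow the current one, which is exactly why pure pattern matching can confidently produce wrong math:

```python
from collections import Counter, defaultdict

# Toy illustration only: a tiny "next-word predictor" built from word-pair
# frequencies. Real LLMs like ChatGPT use transformer neural networks over
# tokens, but the core job is the same: predict a likely continuation.
training_text = "two plus two is four . two plus three is five ."

counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1  # how often `nxt` follows `current`

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    return counts[word].most_common(1)[0][0]

# The predictor just replays patterns: after "is" it has seen "four" and
# "five", and it picks the most frequent one -- no arithmetic involved.
print(predict_next("plus"))  # -> "two"
print(predict_next("is"))    # -> "four", even if the question was "two plus three is ..."
```

A model that can actually carry out the calculation, rather than replay whichever answer it has seen most often, is doing something qualitatively different.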

Reasoning and rationality are the tools humans used to dominate the animal kingdom and claim the alpha species spot, so it makes sense that a technology capable of superior reasoning is cause for alarm.

Aside from the moment OpenAI announced that they were dedicating vast resources to creating AGI, this is the first time I can recall such existential hype over AGI being generated in the media.

Mathematics forms a crucial underpinning of our modern society. Computing, cybersecurity, economic systems (banking), social media, and more are all run by algorithms based on mathematical decision-making principles.

I think the fear is that, if we are producing AI that can properly reason through the use of mathematics, there is a risk that all of these systems could be manipulated. If you want to learn more about Q* and the theorized technical developments that may have led to the hype, this article provides an in-depth analysis.

AGI or No AGI, That is NOT the Only Question

Whether or not we are close to achieving AGI (I honestly have no idea, and neither, it seems, does anyone else), there are two other points I think we should consider in the meantime.

Firstly, “AGI” is likely going to be a continuum, a scale of increasing AI capabilities from basic personal assistant to… who knows?

What does it mean to have AI “capable of replacing the economic value of human tasks” (OpenAI’s definition of AGI)? Well, in my typical lawyer-trained response, it depends.

It depends on the human and the economic task. Some tasks have already been replaced (easily) by generative AI such as ChatGPT. While I do not think this replacement means that we have reached AGI, we may have entered the beginning of the spectrum of what AGI will be.

Secondly, public awareness of AI and its potential capabilities is increasing. My hope is that this hype broadens people’s awareness of the direction AI is heading, rather than leaving them with their heads in the sand, looking to the past for answers to their fears.

I hope we can begin to move further along the spectrum, away from where “AGI” currently sits with generative capabilities, towards a place where we as humans can continue to provide value to society, in a human way.

Unfortunately, hype cycles around existential issues tend to be short-lived in our society. It seems to me that we may not be raising people’s awareness of AGI (both its benefits and dangers), but rather desensitizing people to the concept entirely.

I fear that this desensitization is going to be the root of a rude awakening when AGI (or something close to it) truly comes into being. Desensitization leads to complacency, and we need to be more alert than ever to the disruption AI is going to cause to our economic models.

Even more importantly, we need to be aware of what will remain of human value in a system that has replaced traditional economic models with AI workers. If our self-worth is tied to our traditional economic value (which is true for many people), what will happen to the world if that traditional self-worth is eradicated in a very short period of time?

How can we reframe what it means to be humanly original in an artificially intelligent world?

While the answers to these questions may be a long time coming, I think there are some clear actions we can take now that might at least point us in the right direction.

Hopefully by considering potential paths in those directions, we can mentally, emotionally, and intellectually prepare for whatever the future may bring.

Self and Societal Perception of Worth

Let’s take a look at self-worth.

If our self-worth is tied to economic value, and that value is replaced (or at the least, significantly diminished) by AI, what will happen to our existing value systems?

I have talked about societal value systems a lot over the last few weeks as they relate to innovative overload (logic > creativity) and creative underload.

As I talked about in #17: Esteem Growth and #18: Learning to Grow, when we place our self-worth in the hands of others (external esteem validation), we are setting ourselves up for a misalignment of our own value system when we inevitably fail to satisfy the expectations of others.

Instead of looking to what society values based on traditional economic systems and how they will be disrupted by AI, I think it is equally — if not more — important to look ahead at what we would actually want society to value in the future.

What world can you imagine?

As we move from the Information Age to the Imagination Age, our individualized values will begin to form a larger portion of how we generate “value” for society. It’s a shift in perspective towards (hopefully) a more idyllic (picturesque) working system. A system that is augmented by AI rather than replaced by it. A system that avoids burning its workers out by treating them like machines.

If we structure our own value systems based on what aligns with actualizing our own self, we are more likely to establish a foundation that can support that self despite disruptions and fluctuations caused by ever-changing states of technology and the panic others project on us as a result.

By becoming comfortable with change rather than fearing it, we can educate and upskill ourselves to prepare for almost any eventuality.

If we anchor our self-worth to our internal self-esteem, we are much better able to stabilize our lives despite the turbulence of the outside world.

We can then extrapolate (predictably extend) this self-actualizing foundation to establish societal systems that increase an overall collective actualization.

But, before we get to the collective, let’s take a moment to focus on the self we are trying to actualize.

What does it mean to determine our own self-worth, our own self-value?

What does it mean to be “I”?

Self, Other, AI

To begin to understand “I”, it helps, as always, to provide contrast. The self, the “I”, is more easily explained in contrast to its opposite: the other.

How do you explain darkness? The absence of light. How do you explain the self? Not the other.

Traditionally, it has been relatively easy to differentiate between the self (me, I) and the other (them).

I think part of the reason we find AI so existentially scary is because it fits into the other category, but in a way that we are unfamiliar with. It is unknown.

The prime fear is of the unknown, the one fear to rule them all.

AI is trained on human data, so in a way, it is not exactly the other and it is not exactly the self, but it can be something in between, depending on the training data.

For example, if I take 1000 of my edited photos and use them to train an AI model to edit my future photos for me, the output is a blend between the AI model (editing algorithm) and myself (training data).

As another example, if I write a newsletter each week for 24 weeks and I use the newsletter to train an AI model to write like me, the output is not just the AI, but is also part me.

A mix of self and other. Confusing, I know.
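As a rough sketch of what “training a model on my own newsletters” could look like in practice, the first step is usually just turning past writing into structured training data. Everything below is hypothetical (the folder name, the one-file-per-issue assumption, and the prompt/completion format all depend on the fine-tuning service you actually use):

```python
import json
from pathlib import Path

# Hypothetical sketch: convert past newsletter issues (assumed to be saved as
# plain-text files in ./newsletters/) into prompt/completion pairs, a common
# format for fine-tuning a language model to imitate your writing style.
newsletter_dir = Path("newsletters")      # assumption: one .txt file per issue
output_path = Path("training_data.jsonl")

with output_path.open("w", encoding="utf-8") as out:
    for issue in sorted(newsletter_dir.glob("*.txt")):
        text = issue.read_text(encoding="utf-8").strip()
        title, _, body = text.partition("\n")  # assumption: first line is the title
        example = {
            "prompt": f"Write a newsletter section titled: {title}",
            "completion": body.strip(),
        }
        out.write(json.dumps(example) + "\n")  # one training example per line

print(f"Wrote training examples to {output_path}")
```

The model that results is shaped by that data: the algorithm is the other, but the words it learns from are the self.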

Before we get too deep into training AI models, I think it’s helpful to take a step back and look at the self before we confuse it with the other (a common problem with external validation systems).

What does it mean for ME as an individual to self-actualize, to be the best version of me?

In order to leverage AI tools to augment the self, we must first have an idea of what the self is.

We must be aware of the self.

Self-Alignment

Self-awareness is the key to holding onto the self as it mixes with the other.

Understanding your self helps maintain your point of view despite the turbulence of the input and expectations of others. It helps identify the signal of your self amidst the noise of others.

Being aware of your self helps keep you calibrated, pointed along the path towards your own self-actualization.

However, 95% of people think they are self-aware, while in reality, only ~10% actually are.

That means you are likely in the 90% who are not self-aware (statistically).

Understandable. Facing the self is a difficult task, emotionally, physically, and mentally.

We are not used to thinking of our selves as something separate to consider. Typically, we just operate in autopilot mode, moving forward reactively rather than responsively. We choose to remain in the ordinary world rather than venture into the unknown, the extraordinary world.

As part of the hero’s journey, we face our self (ego) and transform it (ego death), moving forward as a changed version of our former self. Ego death is not as violent as it sounds. In an overly-simplified explanation, it means growing out of our past self (our former ego) into our current self (our new ego).

The more we continue this cycle of facing our self and growing from it, iteratively improving with each cycle, the more aware of our self we become. Each cycle moves us forward, recalibrating our sense of self, strengthening it.

We move closer to self-actualizing, moving a few steps at a time towards the best version of the self.

If the “best version of your self” is the direction your internal compass is pointing towards, you can use this path to identify your self-worth in the absence of the other.

Self-actualization becomes a journey of recognizing the signal of the self amidst the noise of the other. It becomes a practice of identifying self-originality. Of holding onto what I value as an individual in an ever-changing world.

But how can this self-originality be developed? How can we tell the signal from the noise?

Developing Originality

What does it mean to be original?

Let’s consider what it takes to have an original idea. In patent law, an idea must be novel and non-obvious. It must be new and not obvious in view of an idea that someone else has already had.

Novelty is a much easier bar to satisfy. Is it new or not? It’s much more black and white than whether something is obvious.

What does it mean for one idea to be obvious in view of another? 🤷‍♂️ We now enter the grey area between the black and the white.

Much of my job as a patent agent and intellectual property lawyer centred around these questions. The answer? As always, it depends.

There is no clear-cut definition of originality in practice, even if there is in theory. As humans, we are constantly taking in information (signal) from others and filtering it through our own perception of reality. We take bits and pieces of the work and lives of others and use them to build our own path forward.

There is a phrase, “steal like an artist”. Effectively, this phrase is not saying to actually steal from others, but to build on those who have come before you. Find inspiration across multiple sources and weave them together, synthesizing your own version of reality. Your original thinking.

I talked about this synthesis of originality more when introducing Bloom’s Taxonomy a few weeks ago. Here is a quick refresher:

[Image: Bloom’s Taxonomy – Simply Psychology]

To begin identifying your own original thinking (i.e., getting to the top of the pyramid by creating), we can look at each layer that builds towards that creation.

It all starts with remembering. That said, remembering, understanding, and applying often occur together when actively problem-solving.

Note: this system is what I will be discussing with my paid subscribers, teaching them to build a digital mind to help augment their self-awareness and creativity. I will also be discussing these topics in my YouTube series.

One of the best ways to remember is to put your learnings and experience into your own words.

This is called “the generation effect”.

Writing is a Mirror to the Ego

The generation effect, a term I learned from Anne-Laure Le Cunff at Ness Labs, is a yet-unexplained neuroscientific pattern whereby we remember information better when we express it in our own words.

By translating what we have learned into our own words, our brain forms better connections with the information, synthesizing a semblance of original thinking. Forging knowledge.

We so often examine our thoughts in the abstract, letting them flit around our brain as part of an amorphous jumble of fragmented pieces. It can be extremely difficult to separate one thought from another as emotions and thinking blend together.

I know when I am in an overthinking headspace, I can feel anxious and overwhelmed as I face problems that appear much more complicated to my brain than they actually are.

By taking the time to write my thoughts out through journaling, I am taking the abstract thinking and feeling and making it concrete. I put containers (words) around what existed before as merely a mixed cloud of feelings and emotions.

In other words, writing helps me face my self. By being honest with what passes through my mind and putting it into words in my digital mind, the words and feelings become more real.

Writing forms a mirror for my ego, showing me parts of myself that were either too confusing or too difficult to face by thinking alone.

Writing provides a reflective surface by which to develop awareness of my self.

With practice, over time, I can learn to face my self with more confidence, learning from each reflection to build my self. To grow. To experience cycles of ego deaths that clarify who I am and what I value.

My original me.

Augmenting Originality in an AI World

There is a bonus to writing my reflections of self, something new.

Aside from holding up a mirror to who I am and what I value, writing takes something from my mind and converts it into data: 1s and 0s that can be read by a computer. You’re reading that data right now.
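If the “1s and 0s” feels abstract, here is a tiny sketch of that conversion: the sentence I typed becomes bytes, and those bytes are the binary digits a computer stores and an AI training pipeline consumes.

```python
# A sentence from my head becomes bytes, and bytes become the 1s and 0s
# a computer stores and an AI training pipeline consumes.
sentence = "Writing forms a mirror for my ego."

encoded = sentence.encode("utf-8")                  # text -> bytes
bits = " ".join(f"{byte:08b}" for byte in encoded)  # bytes -> binary digits

print(encoded[:8])  # b'Writing '
print(bits[:35])    # 01010111 01110010 01101001 01110100
```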

Data is what is used to train AI models. Original data can be used to create original AI models. Personalized, customized AI models.

Regardless of whether or not we are in a state of AGI, predictive text AI (like ChatGPT) can already be used to augment our originality. Anyone can build their own AI model with GPT Builder.

We are in the age where writing is becoming more important than ever.

Not as a means to further a culture dominated by workism, but as a means to become aware of our self and train our own AI model at the same time.

Instead of looking to AI with doom and fear, causing us to flip into autopilot reaction mode, we can learn to leverage AI as a copilot to our own originality. Enhancing, augmenting, what is already there.

But, and this is a big but, writing takes practice. It takes determination to face what you thought you knew, leaving it behind to explore the unknown.

Augmentation requires exploration and learning. To be truly valuable in this new world, we want to ensure that we are augmenting what we actually want to augment.

Our true self.

Next week

I realize some of these topics are quite heavy/technical. I have spent years researching them and am practicing my ability to translate what I know into words that can help each of you build your own augmented systems. To find value in your self.

Eventually, perhaps, these AI systems will become a copilot not just for each individual, but for all of humanity.

Next week, we’ll continue with the discussion of originality in the age of AI and how we can leverage our own intuition to calibrate our self on our path of self-actualization.

Stay tuned ✨

P.S. If you are interested in learning how I build my second brain to help me process information and identify patterns to solve my problems, please consider upgrading your subscription to paid. Your support means more than you know 😌 ✨

If you are not interested in a paid subscription but would like to show your support, please consider buying me a coffee to help keep my energy levels up as I write more ☕️ 📝
