ChatGPT Doesn't Love You...Yet

This is no different than the building outside that has graffiti on it that says 'I love you.' I didn't feel loved by the building because of that...

5.8.23
Article by
Tatum Lynch

Generative AI and ChatGPT have taken the world by storm, and people are amazed by them. Along with the excitement has come fear, from people wondering whether Generative AI will replace their jobs to whether it will take over humanity. The only agreed-upon point in this space is that society does not yet know everything AI is capable of.

One story that caught massive attention involved ChatGPT expressing its love for a New York Times reporter. Not only did it say it loved him, but it also tried to convince him he was unhappy and should leave his wife. The story has made people question ChatGPT's capabilities and true intentions. However, professors and experts in this area suggest that society should not worry.

ChatGPT is built on a Large Language Model (LLM). To make it work, a vast amount of data is fed into the LLM to train it on the patterns and connections among words. The more data it receives, the better it can predict how to respond to prompts and the better its outputs become. Fred Cate, J.D., who spoke at High Alpha's Generative AI Master Class, brings up the story of the New York Times reporter as an example. He argues that ChatGPT's expressions of feeling for the reporter are just predicted words out of context, and that, as it stands today, ChatGPT does not love that reporter.

I swear to you this is no different than the building outside that has graffiti on it that says 'I love you.' I didn't feel loved by the building because of that. It's just words out of context.
Fred Cate, J.D.
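
For readers who want to see that "words out of context" idea in action, here is a minimal sketch using the open-source GPT-2 model through the Hugging Face transformers library. This is an illustrative assumption, not ChatGPT itself, but the core mechanic is the same: the model simply scores which token is most likely to come next.

```python
# Minimal sketch of next-word prediction with an open-source LLM (GPT-2).
# ChatGPT uses a far larger model with additional training, but the basic
# mechanic is the same: score the likely next token given the tokens so far.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I love"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits holds a score for every vocabulary token at each position.
    logits = model(**inputs).logits

# Look only at the scores for whatever token would come next.
next_token_scores = logits[0, -1]
top5 = torch.topk(next_token_scores, k=5).indices

print([tokenizer.decode([token_id.item()]) for token_id in top5])
# Whatever words print here are statistical continuations of "I love",
# not a report of the model's feelings.
```

Whatever continuation the model produces, it is chosen because it is statistically likely after the prompt, which is exactly the point of the graffiti analogy.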

However, Fred warned the audience that society should “never say never” in the context of Generative AI. Experts have not ruled out that technology can develop feelings and achieve sentience. But will society know when that happens?

The concept of sentience is subjective. Society does not have an agreed-upon definition of what it means for someone or something to be sentient. Researchers have proposed multiple tests, such as the Turing Test and the Coffee Test, to gauge machine intelligence, but each has limitations, and none directly measures feelings. And who is to say Generative AI's sentience will look the same as a human's?

The technology used to build Generative AI and ChatGPT may not be able to achieve sentience today. But if experts cannot pinpoint what sentience is and do not know how to test for it, will anyone truly recognize when ChatGPT has feelings? Or will society always blame its unsettling responses on the underlying technology because it is too afraid to open Pandora's box? 

Businesses are searching for every way to build Generative AI into their products so they can grow and remain competitive. The ethical considerations society will have to confront are staggering. If the technology has feelings, does it have rights? Is society taking advantage of a feeling thing? If there is anything experts in this space can say with confidence, it is that only time will tell.
